Saturday, May 8, 2010

Lecture #23: Cryptography

Cryptography is necessary when transmitting confidential information across a network. It helps prevent the compromise of data through attacks such as eavesdropping, active insertion, and impersonation. There are two main types of cryptography used today: symmetric key cryptography and public-key cryptography.

Symmetric key cryptography uses ciphers to encrypt/decrypt messages. It requires both parties to use a common key, so the issue arises of how to share that common key between the parties.

Public key encryption uses two keys: a public key, which is publicly available, and a private key, which is used to decrypt messages encoded with the public key. It is nearly impossible to determine the private key for any given public key. Because public key encryption is more computationally expensive, two parties will often exchange a symmetric key using public key encryption and then continue with symmetric key cryptography for the remainder of the secure session.
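
A minimal sketch of this hybrid approach, assuming Python's third-party cryptography package (the key sizes, padding choice, and Fernet cipher are illustrative choices, not from the lecture): the symmetric session key is exchanged under RSA, and the bulk data travels under the cheaper symmetric cipher.

    # Hybrid encryption sketch: RSA protects a symmetric session key, which
    # in turn protects the bulk data. Requires the "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Receiver generates an RSA key pair and publishes the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender picks a fresh symmetric key and encrypts it with the public key.
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Bulk data travels under the symmetric cipher for the rest of the session.
    ciphertext = Fernet(session_key).encrypt(b"confidential message")

    # Receiver unwraps the session key with its private key and decrypts.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"confidential message"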

Wednesday, May 5, 2010

Lecture 24: Secure Communications

The first thing we talked about in this lecture was message integrity. We need to be able to verify that a message, once received, is in its original form and has not been modified. This is accomplished by generating a message digest for the message at its origin before it is sent, generating another one once it has been received, and then comparing the two digests. Two such algorithms were mentioned: MD5, which outputs a 128-bit message digest, and SHA-1, which outputs a 160-bit message digest. We then discussed MACs, or message authentication codes, which are basically hashes of the message keyed by a value shared between the sender and the receiver. A MAC provides a guarantee of who created the message, but not necessarily who sent it; this problem can be addressed by including nonce values as part of the keyed hash to prevent playback attacks.
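
As a small illustration, Python's standard hashlib and hmac modules can produce the digests and keyed MAC described above (the message and shared key below are made up):

    import hashlib, hmac

    message = b"transfer $100 to Bob"         # example message (made up)
    shared_key = b"key shared by A and B"     # example MAC key (made up)

    # Plain message digests: 128-bit MD5 and 160-bit SHA-1.
    md5_digest = hashlib.md5(message).hexdigest()    # 32 hex chars = 128 bits
    sha1_digest = hashlib.sha1(message).hexdigest()  # 40 hex chars = 160 bits

    # Keyed MAC: only someone holding shared_key can produce a matching tag.
    tag = hmac.new(shared_key, message, hashlib.sha1).hexdigest()

    # Receiver recomputes the MAC over the received message and compares.
    expected = hmac.new(shared_key, message, hashlib.sha1).hexdigest()
    print(hmac.compare_digest(tag, expected))  # True if the message is unmodified
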
Later we discussed digital signatures, which are another way to verify the origin of a message. For short messages it is sufficient to encrypt a message using one's private key before sending it to a recipient, who can then verify the sender by decrypting it with the public key. However, for large messages this is not efficient, so instead of signing the message itself, one can generate a message digest for the message and sign the digest. The recipient then verifies the message by decrypting the signed hash value, generating a message digest for the received message, and checking whether the two match.
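
A minimal sketch of sign-the-digest, again assuming the cryptography package (the padding and hash choices are illustrative): the library hashes the message and signs the digest with the private key, and the recipient verifies with the sender's public key.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"a large message " * 1000

    # Sign: hash the message and sign the digest with the private key.
    signature = signer_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verify: rehash the received message and check it against the signature
    # using the sender's public key.
    try:
        signer_key.public_key().verify(signature, message,
                                       padding.PKCS1v15(), hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("message was modified or not signed by this key")
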
Then we went over public key certification and certification authorities. Without them there is no way to know whether a provided public key actually belongs to the person who provided it. In practice, a public key is submitted to a certification authority (CA), which verifies the identity of the person who submitted the key and then signs it with its own private key; the result is the certificate. When someone wants another person's public key, they can get a certificate from that person or from another source, verify its signature using the CA's public key, and know that the key they hold is the legitimate public key of the other person.
The second-to-last topic covered was SSL, a security protocol implemented above the transport layer that provides confidentiality, integrity, and authentication. SSL supports many encryption algorithms, so a client and server must agree on which one to use: the client sends the server a list of acceptable cipher suites, and the server picks one and tells the client which it chose. SSL communication starts with a handshake, during which the identity of the server is authenticated via a public key certificate, nonces are exchanged along with the cipher suite choice, and MACs of all handshake messages are exchanged. The encryption and MAC keys are computed from the information exchanged during the handshake, and the MACs of the handshake messages are used to verify that no one modified the initial handshake messages, which were not encrypted. To detect tampering with data during communication, each MAC is computed over a sequence number, the MAC key, the message type, and the data carried in the packet.
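From the client side, the outcome of this negotiation can be seen with Python's standard ssl module; the host name below is just a placeholder, and this sketch only shows the client end of the handshake.

    import socket, ssl

    host = "example.com"  # placeholder server (assumption)

    context = ssl.create_default_context()  # verifies the server certificate against known CAs

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # By this point the handshake has run: the certificate was checked
            # and a cipher suite was chosen from the client's offered list.
            print(tls_sock.version())  # negotiated protocol version
            print(tls_sock.cipher())   # (cipher suite, protocol, secret bits)
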
The last thing we discussed was IPsec, which is implemented at the network layer and provides data integrity, origin authentication, replay attack prevention, and confidentiality. IPsec has two modes of operation: transport mode, in which the end systems take care of IPsec, and tunneling mode, in which IPsec is handled by routers (the first-hop routers from each of the end systems). There are two different protocols that can be used with IPsec. The first is the Authentication Header (AH), which provides authentication of the source as well as integrity of the data being transmitted, but no confidentiality. The second is the Encapsulating Security Payload (ESP), which provides everything AH does and adds confidentiality. The most popular combination is tunneling with ESP. When using tunneling with ESP, the original IP datagram gets encrypted using an agreed-upon algorithm, and a sequence number and an SPI (Security Parameter Index, basically an index into a lookup table the router uses to figure out how to process the packet) are prepended to the encrypted packet. The whole datagram is then run through a MAC algorithm, the result is appended to the message, and finally a new IP header is attached to the beginning before it is sent. A receiving router then uses the SPI in the packet to figure out how to handle the payload, verifies its integrity, and forwards the original packet to its intended destination.
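The real ESP wire format is specified in the IPsec RFCs; the toy sketch below only mirrors the ordering described above (SPI and sequence number in front, the encrypted original datagram, a MAC over the result, and a new outer IP header), with made-up keys and a stand-in cipher. It is not a working IPsec implementation.

    import hmac, hashlib, struct
    from cryptography.fernet import Fernet  # stand-in cipher, not the real ESP transform

    enc_key = Fernet.generate_key()      # agreed-upon encryption key (assumption)
    mac_key = b"negotiated-mac-key"      # agreed-upon MAC key (assumption)

    def esp_tunnel_encapsulate(original_ip_datagram: bytes, spi: int, seq: int) -> bytes:
        # 1. Encrypt the entire original IP datagram.
        encrypted = Fernet(enc_key).encrypt(original_ip_datagram)
        # 2. Prepend the SPI and sequence number (4 bytes each, network byte order).
        body = struct.pack("!II", spi, seq) + encrypted
        # 3. Append a MAC computed over the SPI, sequence number, and ciphertext.
        mac = hmac.new(mac_key, body, hashlib.sha1).digest()
        # 4. The tunnel router prepends a new outer IP header (placeholder here).
        new_ip_header = b"<outer IP header>"
        return new_ip_header + body + mac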

Sunday, May 2, 2010

Lecture 22: Network Security

In this lecture, Professor Gunes talked about network security.

There is no security built into the internet because of its initial design. Early security flaws involved phone phreaking, where whistling the correct tone into the phone could reset the trunk lines. Robert Morris created a worm in 1988 that infected computers to measure how many computers were on the internet; due to poor coding, it brought down around 6,000 computers. Kevin Mitnick was the first hacker on the FBI's Most Wanted list. He stole many credit card numbers and served time for his crimes. He is now a security consultant.

Some worms in history were mentioned, including the Sapphire Worm, the fastest computer worm in history: it infected more than 90 percent of vulnerable hosts within 10 minutes. Back in the day, patches and system updates were more "optional"; there were no automatic updates. This led to huge problems, because people who weren't computer people wouldn't update their software. DoS attacks involve sending out bogus requests to overload a system.

The number of desktops grew exponentially in the 80s, but there was still no emphasis on security. The internet wasn't initially designed for commercial purposes; it was designed for a group of mutually trusting users.

The different parties involved in a service (the provider, the user, the attacker) all have different concerns about what they would like to protect. In the bank example in the slides, the bank wants to protect its money, so users should not be able to change the amount of money whenever they want. The good guys have to think like the bad guys to anticipate what they are planning to do.

The basic security services that are essential in network communication are authentication, authorization, availability, confidentiality, integrity, and non-repudiation. Security attacks fall into passive and active types. Passive attacks include message eavesdropping and monitoring of transmissions, while active attacks, which generally modify the data stream, include masquerade, replay, modification of message contents, and denial of service.

Wednesday, April 28, 2010

Overview Session

We will have an overview session on Thursday, May 6th at 12pm.
The session is mainly to help review material and attendance is optional.

Friday, April 16, 2010

“Survivable Routing in Multi-hop Wireless Networks” by Dr. Srikanth Krishnamurthy

Today we listened to Dr. Krishnamurthy give a talk on “Survivable Routing in Multi-hop Wireless Networks.” Multi-hop wireless networks use more than one node along a path to send a packet wirelessly from one point to another. This is much more cost effective than running cable because such a network consists of cheap hardware, is quickly and easily extendable, is easily manageable and reliable, and can be built rapidly. Many test beds are currently in use at MIT, the University of Utah, UCR, and Georgia Tech. There are also public deployments being run in places such as Singapore.
Even though there has been much research and work done on wireless multi-hop networks, many issues remain. The main issues are routing quality, reliability, and security.
Dr. Krishnamurthy talked about the ETX (Expected Transmission Count) routing metric, which is used to measure the quality of a path between two nodes in a wireless packet network. Two things it fails to consider are link order and security. The metric does not take into account that switching the positions of links along a path changes the effective cost, nor does it account for the fact that, with only a finite number of retransmissions before a packet is dropped, costs change. These shortcomings degrade the reliability and quality of the transmission. ETX was designed to improve transmission, but it does not cover security.
Dr. Krishnamurthy then introduced ETOP, which takes into account the issues that ETX does not cover: node order, dropped packets, and security. The estimated cost of an n-hop path is the expected number of transmissions plus the retransmissions required to deliver a packet over that particular path. Performance results show a 65% improvement over ETX routing for paths whose endpoints are separated by 3 or more hops. ETOP's higher reliability allows TCP to be more aggressive and ramp up its congestion window, so TCP transmission time improves.
But there are security issues that need to be addressed with ETOP. Dr. Krishnamurthy addressed attacks on the paths chosen by the system. One solution is to send out probes that each carry a message, so that the reply to a probe must include not only the probe number but also the message value. Another is to respond only on certain channels in the system. These solutions would thwart attackers who try to fake link quality metrics in order to attract routes.
All in all, Dr. Krishnamurthy feels that ETOP is better, more reliable, and more secure than ETX.

Wednesday, April 14, 2010

Colloquium: "Survivable Routing in Multi-hop Wireless Networks"

Today's class was mostly Prof. Krishnamurthy's talk on routing within multi-hop wireless networks. Multi-hop wireless networks are networks of multiple static routers connecting wirelessly over some area, and are commonly seen in things like city-wide wireless networks, campus networks, surveillance, and the military. Research on this kind of network is being carried out in many places, including Rutgers, MIT, and UCR. What separates these networks from wired networks is that spatial distance and arrangement matter.

Currently, the most popular protocols use ETX (Expected Transmission Count) as a metric of connection quality. ETX uses the expected number of packets that must be sent to deliver one good packet as the weight on a link in the network graph. However, most protocols put too much weight on the number of hops to a destination, and so favor longer, less reliable hops over short, very reliable hops between routers. This forces each router to send more packets, expecting some of them not to arrive successfully. In addition, ETX is blind to where unreliable links are located: lost packets toward the end of a path mean that a failure message has to travel all the way back to the sender, causing more network congestion.

Prof. Krishnamurthy proposes ETOP, a protocol that takes into account the number of hops, reliability, and the location of unreliable links. The metric uses probe packets to determine where unreliable links are and places greater cost on unreliable segments that are farther along the path. Because of this, the metric generates non-commutative paths (the path from router A to B isn't necessarily the same as from B to A). Regardless, he has proven that greedy algorithms are usable for determining paths under this metric, so Dijkstra's algorithm works for the protocol.
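
As a rough sketch of the path-selection step, here is Dijkstra's algorithm run over a link graph whose weights are ETX-style costs (1 / delivery probability). The topology and probabilities are made up, and the actual ETOP cost function, which also weighs where along the path a lossy link sits, is not reproduced here.

    import heapq

    # Link delivery probabilities for a made-up three-node topology (assumption).
    delivery_prob = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.3}

    # ETX-style weight: expected transmissions per successfully delivered packet.
    graph = {}
    for (u, v), p in delivery_prob.items():
        graph.setdefault(u, []).append((v, 1.0 / p))
        graph.setdefault(v, []).append((u, 1.0 / p))

    def best_path_cost(src, dst):
        # Standard Dijkstra over the weighted link graph.
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in graph.get(node, []):
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return float("inf")

    # Two reliable hops (cost ~2.36) beat the single lossy direct link (cost ~3.33).
    print(best_path_cost("A", "C"))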

ETOP on average shows better goodput (useful bandwidth) than ETX, especially for multi-hop paths. The protocol interacts somewhat chaotically with TCP, making its congestion window fluctuate wildly, but it almost always achieves better goodput nonetheless. ETOP also shows worse round-trip time since it may favor paths with more hops.

Prof. Krishnamurthy also talked about security in a wireless network, in particular about safeguards against certain common attacks. A wormhole attack places an attractive link in the network, allowing the attacker to snoop on many of the packets going through the network. Gray hole and black hole attacks expand on this concept by also consuming some or all of the incoming packets. This effectively creates a denial of service attack. Sybil attacks expand on wormholes by spoofing as multiple clients to obtain disproportionately many packets. Colluded attacks involve multiple clients working together to give their fake connections more reliability.

His proposal for network security involves a protocol that detects and uproots attackers. The protocol (separate from ETOP) stops attackers by first looking for suspicious traits in every client on the network. Questionable clients are then interrogated: challenge packets are sent to the offending clients, which must be replied to in a certain way. Failure rates incompatible with their advertised reliability get those clients expelled from the network. This protocol has a very high success rate and a low false positive rate, but it makes the network significantly less efficient.

"Survivable Routing in Multi-hop Wireless Networks"

Today we attended a talk given by Prof. Krishnamurthy on "Survivable Routing in Multi-hop Wireless Networks." We are starting to see multi-hop networks due to interest from industry-leading companies and the availability of cheap hardware with which to implement them. These networks consist of devices such as laptops, handhelds, and smartphones, plus routers that act as access points to interconnect these terminal devices. They are gaining popularity because they use cheap hardware, they are easily and quickly expandable, and they are manageable and reliable. They are good candidates for homes, campuses, and hotels, for public systems such as surveillance cameras, and especially for military communication. There are many experimental networks in use at major universities and cities such as MIT, Rutgers, Houston, and Singapore.

The talk then moved on to routing in these multi-hop wireless networks. While these networks have their benefits, there are also some issues that have to be ironed out, including security, the use of shortest-path routing, and the lack of attention to lower-level functionality such as modulation rate. While shortest-path routing works well for wired communication, in wireless communication it leads to long links of poor quality and, as a result, poor performance. There is no ideal way to establish link quality; packet delivery ratio is a reasonable tool, but still not quite the best. That is where the Expected Transmission Count (ETX) comes in. In ETX, probes are sent to every router, and every router records the probes it receives. The ETX value of a link can then be computed as 1/p, where p is the ratio of total probes received to total probes sent, i.e., the probability that a probe will reach the node. The route is determined by selecting the routers with the smallest ETX values. This was further extended with Expected Transmission Time (ETT), which accounts for multiple transmission rates by sending probes at multiple rates.
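
A tiny sketch of that calculation with made-up probe counts: the delivery ratio p is probes received over probes sent, and the link's ETX value is 1/p.

    # Made-up probe counts for one link over a measurement window.
    probes_sent = 100
    probes_received = 80

    p = probes_received / probes_sent  # delivery ratio for the link
    etx = 1.0 / p                      # expected transmissions per delivered packet
    print(etx)                         # 1.25: about 1.25 sends per successful delivery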

Next we discussed the idea that order matters in these routing algorithms and that both ETX and ETT fail to deal with the position of the nodes. ETOP was designed to capture the three factors that affect the cost of a path: the number of nodes, the quality of the links, and the relative position of the nodes. The cost of an ETOP path can be determined by adding the expected number of transmissions to the number of retransmissions required to deliver the packet. ETOP has shown better performance than ETX, and its higher reliability allows TCP to ramp up more aggressively, so TCP throughput improves.

The final issue concerns security. These networks have to deal with security problems such as wormholes or corrupted nodes that lead to data compromise. An attacker can mount attacks by reporting false pf values or by colluding with another attacker, both of which can cause routes to pass through adversaries. Attackers can also send extra probes in order to advertise higher link quality and attract routes. To deal with such attackers, nodes should detect whether probes are being received at a higher frequency than expected, or place random nonces in the probes so that, when link qualities are computed, the nonces must be reported as well, preventing adversaries from taking control of routes.
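
A loose sketch of the nonce idea (all names and numbers here are illustrative, not from the talk): the sender remembers which nonces it actually placed in probes, and a neighbor's claimed delivery ratio only counts receipts whose nonces were really sent.

    import secrets

    sent_nonces = set()

    def send_probe():
        # Each probe carries a fresh random nonce that the sender records.
        nonce = secrets.token_hex(8)
        sent_nonces.add(nonce)
        return {"type": "probe", "nonce": nonce}

    def delivery_ratio(reported_nonces):
        # Only nonces we genuinely transmitted count toward link quality, so an
        # attacker cannot inflate the metric by inventing probe receipts.
        valid = [n for n in reported_nonces if n in sent_nonces]
        return len(valid) / max(len(sent_nonces), 1)

    probes = [send_probe() for _ in range(10)]
    heard = [p["nonce"] for p in probes[:8]] + ["forged-nonce"]
    print(delivery_ratio(heard))  # 0.8: the forged receipt is ignored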