Wednesday, April 28, 2010

Overview Session

We will have an overview session on Thursday, May 6th at 12pm.
The session is mainly to help review material and attendance is optional.

Friday, April 16, 2010

“Survivable Routing in Multi-hop Wireless Networks” by Dr. Srikanth Krishnamurthy

Today we listened to Dr. Krishnamurthy give a talk on “Survivable routing in multi-hop wireless networks.” A multi-hop wireless network uses more than one node along a path to send a packet wirelessly from one point to another. This is much more cost effective than running cable because the hardware is cheap, the network can be extended quickly and easily, and it is manageable and reliable. Many test beds are currently in use at MIT, the University of Utah, UCR, and Georgia Tech. There are also public deployments being run in places such as Singapore.
Even though much research and work has been done on wireless multi-hop networks, many issues remain: routing quality, reliability, and security.
Dr. Krishnamurthy talked about the ETX (expected transmission count) routing metric, which is used to measure the quality of a path between two nodes in a wireless packet network. Two things need to be considered when looking at this metric: order matters, and security is not addressed. ETX does not take into account that switching the positions of links along a path can change the real cost, and it does not account for the fact that, with a finite number of retransmissions, a packet dropped partway along the path changes the cost. Both degrade the reliability and quality of the transmission. ETX was designed to improve transmission quality, but it does not cover security.
Dr. Krishnamurthy then introduced ETOP. This takes into account the issues that ETX does not cover: the order of the links along a path and packets dropped after a finite number of retransmissions. The estimated cost of an n-hop path is the expected number of transmissions plus retransmissions required to deliver a packet over that particular path. Performance results give a 65% improvement over ETX routing for paths that are 3 or more hops long. ETOP's higher reliability also allows TCP to be more aggressive and ramp up its congestion window, so TCP transfer times improve.
But there are security issues that still need to be addressed alongside ETOP. Dr. Krishnamurthy discussed attacks on the paths chosen by the routing metric. One solution is to send probes that each carry a message, so that a valid reply must report not only the probe number but also the message value. Another is to respond only on certain channels in the system. These solutions would throw off attackers who try to fake link-quality metrics in order to attract routes to themselves.
All in all, Dr. Krishnamurthy feels that ETOP is better, more reliable, and more secure than ETX.

Wednesday, April 14, 2010

Colloquium: "Survivable Routing in Multi-hop Wireless Networks"

Today's class was mostly on Prof. Krishnamurthy's talk on routing within a multi-hop wireless network. Multi-hop wireless networks are networks of multiple static routers connecting wirelessly over some area, and are commonly seen in things like city-wide wireless networks, campus networks, surveillance, and the military. Research on this kind of network is being carried out in many places, including Rutgers, MIT, and UCR. What separates these networks from wired networks is the fact that spatial distance and arrangement matter.

Currently, the most popular protocols use ETX (Expected Transmission Count) as a metric of connection quality. ETX uses the expected number of transmissions needed to deliver one good packet as the weight of each link in the network graph. However, most protocols put too much weight on the number of hops needed to reach a destination, and so favor longer, less reliable hops over short, very reliable hops between routers. This forces each router to send more packets, expecting some of them not to arrive successfully. In addition, ETX is blind to where unreliable connections are located – lost packets toward the end of a path mean that a failure message has to travel all the way back to the sender, causing more network congestion.

Prof. Krishnamurthy proposes ETOP, a protocol that takes into account the number of hops, the reliability of links, and the location of unreliable connections. The metric uses probe packets to determine where unreliable connections are and places greater cost on unreliable segments that are farther along the path. Because of this, the protocol generates non-commutative paths (the path from router A to B isn't necessarily the same as from B to A). Regardless, he has proven that greedy algorithms are usable for determining paths given this metric, so Dijkstra's algorithm is usable for the protocol.
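
As a rough sketch of what that means in practice, here is a plain Dijkstra computation in C over a made-up 5-router topology with additive, ETX-style link costs. The real ETOP cost is computed differently (and is not simply additive), but the greedy shortest-path structure is the same idea:

    #include <stdio.h>

    #define N 5                  /* number of routers in the toy topology */
    #define INF 1e9

    int main(void)
    {
        /* cost[i][j] = link cost between routers i and j (INF = no link) */
        double cost[N][N] = {
            { 0,   1.2, INF, INF, 3.0 },
            { 1.2, 0,   1.1, INF, INF },
            { INF, 1.1, 0,   1.3, INF },
            { INF, INF, 1.3, 0,   1.0 },
            { 3.0, INF, INF, 1.0, 0   },
        };
        double dist[N];
        int visited[N] = { 0 };

        for (int i = 0; i < N; i++) dist[i] = INF;
        dist[0] = 0;                                  /* source is router 0 */

        for (int iter = 0; iter < N; iter++) {
            int u = -1;
            for (int i = 0; i < N; i++)               /* pick cheapest unvisited node */
                if (!visited[i] && (u < 0 || dist[i] < dist[u]))
                    u = i;
            visited[u] = 1;
            for (int v = 0; v < N; v++)               /* relax its links */
                if (!visited[v] && dist[u] + cost[u][v] < dist[v])
                    dist[v] = dist[u] + cost[u][v];
        }

        for (int i = 0; i < N; i++)
            printf("cheapest path cost from router 0 to %d: %.1f\n", i, dist[i]);
        return 0;
    }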

ETOP on average shows better goodput (useful bandwidth) compared to ETX, especially for multi-hop paths. The protocol interacts somewhat chaotically with TCP, making its congestion window fluctuate wildly, but it almost always delivers better goodput nonetheless. ETOP also shows worse round-trip time, since it may favor paths with more hops.

Prof. Krishnamurthy also talked about security in a wireless network, in particular about safeguards against certain common attacks. A wormhole attack places an attractive link in the network, allowing the attacker to snoop on many of the packets going through the network. Gray hole and black hole attacks expand on this concept by also consuming some or all of the incoming packets. This effectively creates a denial of service attack. Sybil attacks expand on wormholes by spoofing as multiple clients to obtain disproportionately many packets. Colluded attacks involve multiple clients working together to give their fake connections more reliability.

His proposal for network security involves a protocol which detects and uproots attackers. The protocol (separate from ETOP) stops attackers by first looking for suspicious traits in every client on the network. Questionable clients are then interrogated: challenge packets are sent to the offending clients, which must be replied to in a certain way. Failure rates incompatible with their advertised reliability would expel those clients from the network. This protocol has a very high success rate and low false positive rate, but makes the network significantly less efficient.

"Survivable Routing in Multi-hop Wireless Networks"

Today we attended a talk given by Prof. Krishnamurthy on "Survivable Routing in Multi-hop Wireless Networks." We are starting to see multi-hop networks due to interest from industry-leading companies and the availability of cheap hardware with which to build them. These networks consist of terminal devices such as laptops, handhelds, and smartphones, plus routers that act as access points and provide interconnection between those devices. They are gaining popularity because they use cheap hardware, they are easily and quickly expandable, and they are manageable and reliable. They are a good fit for homes, campuses, and hotels, as well as for public transportation systems, surveillance cameras, and especially military communication. There are many experimental networks in use at major universities and in cities, for example at MIT, Rutgers, Houston, and Singapore.

The talk then moved on to routing in these multi-hop wireless networks. While these networks have their benefits, there are also some issues that have to be ironed out. These issues include security, the use of shortest-path routing, and the lack of attention to lower-level functionality such as modulation rate. While shortest-path routing works well for wired communication, in wireless communication it leads to long links of poor quality and, as a result, poor performance. There is no ideal way to establish link quality; packet delivery ratio is a reasonable tool, but still not quite the best. That is where the Expected Transmission Count (ETX) comes in. In ETX, probes are sent to every router and every router receives these probes. The ETX value of a link can then be computed as 1/pi, where pi is the ratio of total probes received to total probes sent, i.e. the probability that a probe crosses the link. The route can then be determined by selecting the links with the smallest ETX values. This was further extended with Expected Transmission Time (ETT), which accounts for multiple transmission rates by sending probes at multiple rates.
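
As a toy illustration of that 1/p formulation (the probe counts below are made up), the ETX of a link is just the reciprocal of its probe delivery ratio, and a route's cost is the sum of its links' ETX values:

    #include <stdio.h>

    /* Toy ETX computation: p is the fraction of probes that crossed the link. */
    double etx(int probes_received, int probes_sent)
    {
        double p = (double)probes_received / probes_sent;   /* delivery ratio */
        return p > 0 ? 1.0 / p : 1e9;     /* a dead link gets a huge cost */
    }

    int main(void)
    {
        /* e.g., 8 of 10 probes arrived on this link */
        printf("ETX = %.2f expected transmissions per packet\n", etx(8, 10));
        return 0;
    }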

Next we discussed the idea that order matters in these routing algorithms, and that both ETX and ETT do not deal with the position of nodes along the path. ETOP was designed to capture the three factors that affect the cost of a path: the number of nodes, the quality of the links, and the relative position of each node. The cost of an ETOP path can be determined by adding the expected number of transmissions to the number of retransmissions required to deliver the packet. ETOP has shown better performance than ETX, and its higher reliability allows TCP to ramp up more aggressively, so TCP throughput improves.
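
To see why the position of a lossy link matters, here is a rough C simulation (entirely illustrative: the per-link retry limit of 3 and source-side recovery are assumptions, and this is not the ETOP cost formula itself). It compares two 3-hop paths with the same link qualities in different orders; both have the same ETX, yet the path whose weak link is near the destination wastes more transmissions:

    #include <stdio.h>
    #include <stdlib.h>

    #define TRIALS 200000
    #define K 3            /* per-link retransmission limit (assumed) */

    /* Deliver one packet over the path; count link-level transmissions,
       restarting from the source whenever a hop exhausts its K attempts
       (end-to-end recovery). */
    static long deliver(const double *p, int hops)
    {
        long tx = 0;
        for (;;) {                           /* one end-to-end attempt */
            int h, ok = 1;
            for (h = 0; h < hops; h++) {
                int a, got = 0;
                for (a = 0; a < K; a++) {
                    tx++;
                    if ((double)rand() / RAND_MAX < p[h]) { got = 1; break; }
                }
                if (!got) { ok = 0; break; } /* dropped mid-path */
            }
            if (ok) return tx;               /* reached the destination */
        }
    }

    int main(void)
    {
        double weak_first[] = { 0.5, 0.9, 0.9 };  /* weak link near source */
        double weak_last[]  = { 0.9, 0.9, 0.5 };  /* weak link near sink   */
        long a = 0, b = 0;
        for (int i = 0; i < TRIALS; i++) {
            a += deliver(weak_first, 3);
            b += deliver(weak_last, 3);
        }
        printf("avg transmissions, weak link first: %.2f\n", (double)a / TRIALS);
        printf("avg transmissions, weak link last : %.2f\n", (double)b / TRIALS);
        return 0;
    }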

The final issue concerns security. These networks face security issues such as wormholes or corrupted nodes that can lead to data being compromised. An attacker can report false pf (link-quality) values or collude with another attacker, all of which can cause routes to pass through adversaries. Attackers can also send more probes than expected in order to advertise higher quality and attract routes. To deal with these potential attackers, nodes should detect whether probes are being received at a higher frequency than expected, or a random nonce can be placed in each probe so that, when link qualities are computed, the nonces must be reported as well, in an attempt to stop adversaries from taking control of routes.
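
A tiny sketch of the nonce idea (the struct layout and field names here are hypothetical, just to make the mechanism concrete): a neighbour's link-quality report must echo the nonce from a probe it actually received, so an attacker cannot claim to have heard probes it never saw.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct probe  { uint32_t seq; uint32_t nonce; };        /* broadcast by the prober */
    struct report { uint32_t seq; uint32_t echoed_nonce; }; /* returned by a neighbour */

    static int report_is_valid(const struct probe *sent, const struct report *got)
    {
        return got->seq == sent->seq && got->echoed_nonce == sent->nonce;
    }

    int main(void)
    {
        struct probe p = { 1, (uint32_t)rand() };      /* probe we sent         */
        struct report honest = { 1, p.nonce };         /* real receiver echoes  */
        struct report forged = { 1, 12345 };           /* attacker has to guess */

        printf("honest report accepted: %d\n", report_is_valid(&p, &honest));
        printf("forged report accepted: %d\n", report_is_valid(&p, &forged));
        return 0;
    }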

Lab 4: CGI Search Engine

Initial lab on SNMP has been canceled due to technical issues.
Pre-lab questions will become part of the 2nd Homework assignment.

Instead I have posted a new lab on developing a CGI Search Engine which is due Wednesday, Apr 28 at 1:00 pm.

Also, there will be a bonus for the students who were scheduled first in the lab, including in previous ones.

Tuesday, April 13, 2010

Lecture 20: HTTP (April 12)

The lecture started with a presentation by Jeffrey about Network Neutrality. Network Neutrality is a heated topic about whether or not to keep the internet an equal service to all. He touched on various aspects of the topic, both for and against it. Then for the remainder of the lecture we covered the topic of Web Servers and CGI.

Web Servers communicate using HTTP. They listen for incoming connections from various client programs on well-known ports. Once a connection with a client is made, the server receives requests from and sends data to that client. Web Servers can also provide dynamic documents, which enable things like custom ads, database access, shopping carts, and so on.

Web servers can be general or custom. A custom server needs a method of mapping HTTP requests to service requests, sending back data, and handling errors, but it comes with the drawbacks of needing dedicated ports and duplicating code such as the basic TCP server, HTTP parsing, error handling, headers, and access control.
The alternative is the smart web server. This takes a general-purpose web server and processes documents as it sends them to clients. Server Side Includes (SSI) provide smart servers with commands embedded in HTML comments to make serving pages more capable; these commands are known as SSI directives. Servers also use many scripting languages, such as ASP, PerlScript, JavaScript, and others.

The Common Gateway Interface (CGI) was then discussed. CGI is a standard mechanism for associating URLs with server-executable programs: a protocol that specifies how requests are passed to external programs and how responses are sent back to the client. The server in this case acts as an HTTP front end, relaying communication between the client and the CGI program.
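
As a rough sketch of what a CGI program (like the one in Lab 4) has to do: the server passes the request's query string in the QUERY_STRING environment variable, and the program writes a header, a blank line, and then the document to standard output. Everything else below is illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *query = getenv("QUERY_STRING");   /* e.g. "q=networks" */

        printf("Content-Type: text/html\r\n\r\n");    /* header, then blank line */
        printf("<html><body>\n");
        /* a real search engine would parse and escape the query here */
        printf("<p>You searched for: %s</p>\n", query ? query : "(nothing)");
        printf("</body></html>\n");
        return 0;
    }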

CGI will be completed in the next lecture.

Thursday, April 8, 2010

Lecture 19: HTTP (April 7)

We began today with a short presentation from Justin about IPv6 and its differences from IPv4. We then began a discussion about the HTTP protocol. HTTP is the protocol that supports communication between browsers and servers. HTTP is implemented so that a client requests something from the server and the server replies to that request. HTTP connections of this kind are stateless, and a connection can be either persistent or not.

The format of an HTTP request is lines of ASCII text, each followed by a carriage return and line feed. In HTTP version 1.0 and above, the version number is required on the request line, while in earlier versions it was not.
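
For example, a minimal GET request in HTTP/1.0 might look like the following (the host name is made up); every line ends with a carriage return and line feed, and a blank line marks the end of the request:

    GET /index.html HTTP/1.0
    Host: www.example.com
    User-Agent: ExampleBrowser/1.0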

HTTP methods that are supported across the board are GET, POST, and HEAD. Servers that use HTTP 1.1 will support PUT, DELETE, TRACE, and OPTIONS.

We then concluded the class with a discussion of what cookies are and what they are used for. Cookies are used in order to allow web sites to keep state information about you. This is implemented as a cookie value that is sent in the HTTP header as a field, and then the server will compare that value with its database of cookies. This way, state information is saved.
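
For example (the value here is made up), a server's response might include the header

    Set-Cookie: session=abc123

and the browser then adds

    Cookie: session=abc123

to every later request to that site, letting the server look up the matching state in its database.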

Tuesday, April 6, 2010

Lecture 17: Socket Programming Issues

Today's lecture started as all lectures do, with a review of the previous topic, 'TCP/UDP Sockets'. Dr. Gunes began by reiterating the specific code used to create a TCP socket. He also discussed other functions used to move data to and from the transport layer, including listen(), accept(), and close(). These functions are system calls because they request services from the operating system that are inaccessible at the application layer. After re-explaining the TCP calls, the professor progressed to TCP's sloppy brother, UDP, and mentioned several more system calls and some common errors.

After the lecture review was complete, the professor transitioned to the new information. The main idea of this lecture was to take the system calls and low level interface from the previous lecture and understand how to use these tools to create functional applications using the protocols.

The first application the instructor discussed was a generic TCP client. When designing such an application many things have to be taken into account. If blocking IO is used, the listening aspect of the program must be in a separate thread. If the IO is non-blocking, then it must be polled continuously.

Another option is to use alarms and interrupts. Or, the "select()" function could be used. The instructor then spent several slides describing how to use the select function.
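
A minimal sketch of how select() is typically used, assuming sockfd is an already-connected socket descriptor: the call waits until standard input or the socket is readable, or until a 5-second timeout expires, instead of blocking on a single read().

    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    int wait_for_input(int sockfd)
    {
        fd_set readfds;
        struct timeval tv = { 5, 0 };          /* 5 seconds, 0 microseconds */

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);
        FD_SET(sockfd, &readfds);

        /* first argument is the highest descriptor being watched, plus one */
        int n = select(sockfd + 1, &readfds, NULL, NULL, &tv);
        if (n < 0)  return -1;                 /* error */
        if (n == 0) return 0;                  /* timed out */
        if (FD_ISSET(sockfd, &readfds))
            printf("socket is ready to read\n");
        if (FD_ISSET(STDIN_FILENO, &readfds))
            printf("stdin is ready to read\n");
        return n;
    }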

After discussing these programming decisions, specific errors were discussed. We learned about error codes and how to read their descriptions using the strerror() function.
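
A tiny illustration of errno and strerror() (the failing call is contrived on purpose):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        if (close(42) < 0)    /* 42 is (almost certainly) not an open descriptor */
            printf("close failed: errno=%d (%s)\n", errno, strerror(errno));
        return 0;
    }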

Following the errors, general programming strategies for both clients and servers were discussed. When designing a client, several things must be attended to, including identifying the server and selecting a port. When designing a server, the programmer must decide between concurrent and iterative client handling; there are many differences between the two choices that must be considered.
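
A rough sketch of that decision, assuming listenfd is an already-listening TCP socket and handle_client() is a hypothetical function doing the application-specific work: handling the client inline gives an iterative server, while forking a child per connection gives a concurrent one.

    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void serve(int listenfd, void (*handle_client)(int))
    {
        for (;;) {
            int connfd = accept(listenfd, NULL, NULL);
            if (connfd < 0)
                continue;
            /* Iterative server: just call handle_client(connfd) here and the
               next client waits.  Concurrent server: fork a child per client. */
            if (fork() == 0) {            /* child handles this client */
                close(listenfd);
                handle_client(connfd);
                close(connfd);
                _exit(0);
            }
            close(connfd);                /* parent keeps accepting */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                         /* reap finished children */
        }
    }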

Monday, April 5, 2010

Lecture 16: TCP and UDP Sockets

This lecture covers the creation of sockets for TCP and UDP connections. To create a TCP socket, one calls socket(PF_INET, SOCK_STREAM, 0), which returns a non-negative integer descriptor for the newly created socket; a negative return value indicates that creating the socket failed. A newly created socket can be bound to a specific port and address using bind(int sockfd, const struct sockaddr *myaddr, socklen_t addrlen), or the user can let the operating system take care of it.
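
A minimal sketch of those two calls (the port number 5000 is arbitrary and error handling is kept short):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int make_bound_socket(void)
    {
        int sockfd = socket(PF_INET, SOCK_STREAM, 0);
        if (sockfd < 0) {
            perror("socket");
            return -1;
        }

        struct sockaddr_in myaddr;
        memset(&myaddr, 0, sizeof(myaddr));
        myaddr.sin_family = AF_INET;
        myaddr.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local interface */
        myaddr.sin_port = htons(5000);               /* network byte order  */

        if (bind(sockfd, (struct sockaddr *)&myaddr, sizeof(myaddr)) < 0) {
            perror("bind");
            return -1;
        }
        return sockfd;
    }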

A server that needs to accept incoming TCP connections must establish a passive-mode TCP socket. This is done with listen(int sockfd, int backlog), which takes a socket descriptor and an integer specifying the number of pending connections to queue for the server. The intricacies of establishing a TCP connection are handled by the operating system; all the server has to do is call accept(int sockfd, struct sockaddr *cliaddr, socklen_t *addrlen) with the listening socket and a location to hold the struct describing the client's address. The accept function returns a new socket descriptor as a non-negative integer, or a negative value indicating that an error has occurred. To operate on an active TCP socket, the read and write functions are used to receive data from and send data to the socket. To terminate a TCP connection, calling close handles all the TCP details involved in tearing the connection down. For a client to connect to a server, the connect function is used to establish a TCP connection; it must be supplied with an initialized socket descriptor along with the address of the server to connect to. Upon a successful connect, the read and write functions may be used to send and receive data.
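
Continuing the sketch above, a minimal iterative echo server using listen(), accept(), read()/write(), and close() might look like the following (sockfd is assumed to be the bound descriptor from the previous sketch); a client would instead call connect() with the server's address and then use read()/write() on its own descriptor:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    void echo_server(int sockfd)
    {
        listen(sockfd, 5);                       /* passive mode, backlog of 5 */
        for (;;) {
            struct sockaddr_in cliaddr;
            socklen_t addrlen = sizeof(cliaddr);
            int connfd = accept(sockfd, (struct sockaddr *)&cliaddr, &addrlen);
            if (connfd < 0)
                continue;
            char buf[1024];
            ssize_t n = read(connfd, buf, sizeof(buf));   /* data from the client */
            if (n > 0)
                write(connfd, buf, n);                    /* send it straight back */
            close(connfd);                                /* OS handles TCP teardown */
        }
    }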

To create a UDP socket, socket(...) is called with the SOCK_DGRAM constant for the type parameter. Binding an address is also available to UDP sockets, and the process is the same as for TCP sockets. Since UDP is connectionless, once a socket has been created, data can immediately be sent from it using sendto(...), which takes, besides the usual socket descriptor and data, the destination address in the form of a structure. This function returns the number of bytes that the operating system accepted for sending. Since UDP is unreliable, there is no indication of how much of the sent data actually made it to the destination. To receive data, recvfrom(...) may be called with buffer space, a socket descriptor, and a location to store the sender's address. Since UDP is unreliable and recvfrom(...) is a blocking call, countermeasures must be taken to avoid waiting forever for lost packets. One of these is to use SIGALRM, which interrupts the blocked recvfrom call after a specified amount of time has passed.
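
A rough sketch of those UDP calls with a SIGALRM timeout (the server address 192.0.2.1 and the port are placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <signal.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    static void on_alarm(int sig) { (void)sig; /* just interrupt recvfrom */ }

    int main(void)
    {
        int sockfd = socket(PF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in servaddr;
        memset(&servaddr, 0, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_port = htons(7);                        /* illustrative port    */
        inet_pton(AF_INET, "192.0.2.1", &servaddr.sin_addr); /* placeholder address  */

        sendto(sockfd, "ping", 4, 0,
               (struct sockaddr *)&servaddr, sizeof(servaddr));

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_alarm;          /* no SA_RESTART: recvfrom returns early */
        sigaction(SIGALRM, &sa, NULL);
        alarm(5);                          /* give the reply 5 seconds to arrive */

        char buf[512];
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        alarm(0);
        if (n < 0)
            printf("no reply (timed out or error)\n");
        else
            printf("got %zd bytes back\n", n);
        return 0;
    }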

An interesting feature available in the API is a "connected mode" for UDP, which allows the use of sendto() without specifying a destination address, as well as write(), send(), read(), and recv() on that socket descriptor. This, however, is only a convenience; it does not establish an actual connection with the peer.
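
A short sketch of connected-mode UDP (again with a placeholder peer address and port): connect() simply records the peer address in the kernel, so no packets are exchanged when it is called.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int sockfd = socket(PF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                       /* illustrative port   */
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);   /* placeholder address */

        connect(sockfd, (struct sockaddr *)&peer, sizeof(peer));

        send(sockfd, "hello", 5, 0);        /* destination is now implicit */
        char buf[512];
        recv(sockfd, buf, sizeof(buf), 0);  /* blocks; only this peer's datagrams arrive */

        close(sockfd);
        return 0;
    }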