Saturday, May 8, 2010

Lecture 23: Cryptography

Cryptography is necessary when transmitting confidential information across a network. It helps prevent the compromise of data through attacks such as eavesdropping, active insertion, impersonation, etc. There are two main types of cryptography used today: symmetric-key cryptography and public-key cryptography.

Symmetric-key cryptography uses ciphers to encrypt/decrypt messages. It requires both parties to use a common key. Therefore, the issue arises of how to share the common key between the parties.

Public-key encryption uses two keys: a public key, which is publicly available, and a private key, which is used to decrypt messages encoded with the public key. It is nearly impossible to determine the private key for any given public key. Because public-key encryption is more computationally expensive, two parties will often exchange a symmetric key using public-key encryption and then continue with symmetric-key cryptography for the remainder of the secure session.
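As an illustration of that hybrid approach (our own sketch, not code from the lecture), OpenSSL's "envelope" API does exactly this: it generates a random symmetric session key, encrypts it under the recipient's public key, and encrypts the bulk data symmetrically. The helper name and buffer handling are ours, and loading the recipient's key is assumed to happen elsewhere.

    /* Hybrid ("envelope") encryption sketch using OpenSSL's EVP_Seal API. */
    #include <openssl/evp.h>

    int seal_message(EVP_PKEY *pub, const unsigned char *msg, int msglen,
                     unsigned char *ct, int *ctlen,  /* ciphertext out */
                     unsigned char *ek, int *eklen,  /* encrypted session key,
                                                        EVP_PKEY_size(pub) bytes */
                     unsigned char *iv)              /* EVP_MAX_IV_LENGTH bytes */
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int n, total = 0;

        /* Generate and public-key-encrypt a fresh AES key (stored in ek). */
        if (EVP_SealInit(ctx, EVP_aes_256_cbc(), &ek, eklen, iv, &pub, 1) != 1)
            return -1;
        if (EVP_SealUpdate(ctx, ct, &n, msg, msglen) != 1)  /* bulk encryption */
            return -1;
        total += n;
        if (EVP_SealFinal(ctx, ct + total, &n) != 1)        /* final padded block */
            return -1;
        total += n;
        *ctlen = total;
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }

The receiver reverses the process with the matching EVP_Open calls, using its private key to recover the session key.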

Wednesday, May 5, 2010

Lecture 24: Secure Communications

The first thing we talked about in this lecture was message integrity. We need to be able to verify that a message, once received, is in its original form and has not been modified along the way. This is accomplished by generating a message digest for the message at its origin before it is sent, generating another one once it has been received, and then comparing the two digests. Two such algorithms were mentioned: MD5, which outputs a 128-bit message digest, and SHA-1, which outputs a 160-bit message digest. We then discussed MACs, or message authentication codes: essentially a hash of the message being sent, keyed by a value shared between the sender and the receiver. A MAC guarantees who created the message but not necessarily who sent it; that problem can be solved by using nonce values as part of the keyed hash to prevent playback attacks.
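As a concrete illustration (our own sketch, not code from the lecture; the helper name and arguments are ours), a keyed-hash check with OpenSSL's one-shot HMAC function might look like this:

    /* Verify a received MAC: both sides compute HMAC-SHA1 over the message
       with the shared key and compare the resulting 20-byte tags. */
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <string.h>

    int mac_matches(const unsigned char *key, int keylen,
                    const unsigned char *msg, size_t msglen,
                    const unsigned char *received_mac)
    {
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int maclen;

        HMAC(EVP_sha1(), key, keylen, msg, msglen, mac, &maclen);
        return memcmp(mac, received_mac, maclen) == 0;   /* 1 if tags agree */
    }

A nonce would be mixed into the key or message before hashing to defeat playback.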
Later we discussed digital signatures, which are another way to verify the origin of a message. For short messages it is sufficient to encrypt the message with one's private key before sending it; the recipient can then verify the sender by decrypting it with the public key. For large messages this is inefficient, so instead of signing the message itself, one generates a message digest for the message and signs the digest. The recipient verifies the message by decrypting the signed hash, generating a digest of the received message, and checking whether the two match.
Then we went over public key certification and certification authorities. Without them there is no way to know whether a provided public key actually belongs to the person who provided it. What is done now is that public keys are submitted to a certification authority (CA), which verifies the identity of the submitter and then signs the key with its own private key; the signed key is the certificate. When someone wants another person's public key, they can get the certificate from that person or from another source, decrypt it using the CA's public key, and know that the key they have is the legitimate public key of the other person.
The second-to-last topic was SSL, a security protocol implemented above the transport layer that provides confidentiality, integrity, and authentication. SSL supports many encryption algorithms, so a client and server must agree on which one to use: the client sends the server a list of acceptable cipher suites, and the server picks one and tells the client which it chose. SSL communication starts with a handshake, during which the identity of the server is authenticated via a public key certificate, nonces are exchanged along with the cipher suite choice, and MACs of all the handshake messages are exchanged as well. The encryption and MAC keys are computed from the information exchanged during the handshake; the MACs of the handshake messages verify that no one modified the initial handshake messages, which were sent unencrypted. To detect tampering during the rest of the communication, each MAC is computed over a sequence number, the MAC key, the message type, and the data carried in the record.
The last thing we discussed was IPsec, which operates at the network layer and provides data integrity, origin authentication, replay attack prevention, and confidentiality. IPsec has two modes of operation: transport mode, in which the end systems handle IPsec, and tunneling mode, in which IPsec is handled by routers (the first-hop routers of each end system). There are two protocols that can be used in IPsec. The first is the Authentication Header (AH), which provides authentication of the source as well as integrity of the transmitted data, but no confidentiality. The second is the Encapsulating Security Payload (ESP), which provides everything AH does and adds confidentiality. The most popular combination is tunneling with ESP. In tunneling with ESP, the original IP datagram is encrypted using an agreed-upon algorithm, and a sequence number and an SPI (Security Parameter Index, essentially an index into a lookup table telling a router how to process the packet) are prepended to the encrypted packet. The whole datagram is then run through a MAC algorithm, whose result is appended to the message, and finally a new IP header is attached to the front before it is sent. The receiving router uses the SPI to figure out how to handle the payload, verifies its integrity, and then sends the original packet on to its intended destination.
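A rough picture of the resulting tunnel-mode packet, with the fields in the order just described:

    new IP header | SPI | sequence number | encrypted original IP datagram | MAC

The MAC covers everything from the SPI through the encrypted payload, and it is what the receiving router verifies before decrypting and forwarding the inner datagram.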

Sunday, May 2, 2010

Lecture 22: Network Security

In this lecture, Professor Gunes talked about network security.

There is little security built into the Internet because of its initial design. Early security flaws involved phone phreaking: whistling the correct tone into the phone could reset the trunk lines. Robert Morris created a worm in 1988 that infected computers to gauge how many machines were on the Internet. Due to poor coding, it brought down around 6,000 computers. Kevin Mitnick was the first hacker on the FBI's Most Wanted list. He stole many credit card numbers and served time for his crimes. He is now a security consultant.

Among historical worms, the Sapphire worm was the fastest computer worm in history, infecting more than 90 percent of vulnerable hosts within 10 minutes. Back in the day, patches and system updates were more "optional"; there were no automatic updates. This led to huge problems, because people who weren't computer people wouldn't update their software. DoS attacks involve sending bogus requests to overload a system.

The number of desktops grew exponentially in the 80s, but there was still no emphasis on security. The Internet wasn't initially designed for commercial purposes; it was designed for a group of mutually trusting users.

The different parties to a service (the provider, the user, the attacker) all have different concerns about what they would like to protect. In the bank example from the slides, the bank wants to protect its money: users should not be able to change the amount of money whenever they want. The good guys have to think like the bad guys to anticipate what attackers are planning to do.

The basic security services essential to network communication are authentication, authorization, availability, confidentiality, integrity, and non-repudiation. Security attacks come in two types, passive and active. Passive attacks include eavesdropping on messages and monitoring transmissions, while active attacks, which generally modify the data stream, include masquerade, replay, modification of message contents, and denial of service.

Wednesday, April 28, 2010

Overview Session

We will have an overview session on Thursday, May 6th at 12pm.
The session is mainly to help review material and attendance is optional.

Friday, April 16, 2010

“Survivable Routing in Multi-hop Wireless Networks” by Dr. Srikanth Krishnamurthy

Today we listened to Dr. Krishnamurthy give a talk on “Survivable routing in multi-hop wireless networks.” Multi-hop wireless networks use more than one node along a path to send a packet wirelessly from one point to another. This is much more cost effective than running cable: the hardware is cheap, the network is quickly and easily extendable, it is manageable and reliable, and it can be built rapidly. Many testbeds are currently in use at MIT, the University of Utah, UCR, and Georgia Tech. There are also public deployments in places such as Singapore.
Even though much research and work has been done on wireless multi-hop networks, many issues remain, among them routing quality, reliability, and security.
Dr. Krishnamurthy talked about the ETX (expected transmission count) routing metric, which is used to measure the quality of a path between two nodes in a wireless packet network. Two shortcomings need to be considered: link order matters, and security is not addressed. The metric does not take into account that swapping the positions of links along a path yields different real costs, nor that with a finite number of retransmissions a dropped packet changes the cost. Both degrade the reliability and quality of transmission. ETX was designed to improve transmission, but it does not cover security.
Dr. Krishnamurthy then introduced ETOP, which takes into account the issues ETX does not cover: node order, dropped packets, and security. The estimated cost of an n-hop path is the expected number of transmissions plus the retransmissions required to deliver a packet over that particular path. Performance results show a 65% improvement over ETX routing for paths separated by 3 or more hops. ETOP's higher reliability also allows TCP to be more aggressive and ramp up its congestion window, so TCP transmission time improves.
But there are security issues that need to be addressed in ETOP as well. Dr. Krishnamurthy discussed attacks on routing paths. One solution is to send probes that each carry a message value; a valid reply must then echo not only the probe number but the message value. Another is to respond only on certain channels. These solutions would throw off attackers trying to fake link quality metrics in order to attract routes.
All in all, Dr. Krishnamurthy feels that ETOP is better, more reliable, and more secure than ETX.

Wednesday, April 14, 2010

Colloquium: "Survivable Routing in Multi-hop Wireless Networks"

Today's class was mostly on Prof. Krishnamurthy's talk on routing within multi-hop wireless networks. Multi-hop wireless networks involve multiple static routers connecting wirelessly over some area, and are commonly seen in things like city-wide wireless networks, campus networks, surveillance, and the military. Research on this kind of network is being carried out in many places, including Rutgers, MIT, and UCR. What separates these networks from wired networks is the fact that spatial distance and arrangement matter.

Currently, the most popular protocol uses ETX (Expected Transmission Count) as a metric of connection quality. ETX uses the expected number of transmissions needed to deliver one good packet as the weight of a connection in the network graph. However, most protocols put too much weight on the number of hops to a destination, and so favor longer, less reliable hops over short, very reliable hops between routers. This forces each router to send more packets, expecting some not to arrive successfully. In addition, ETX is blind to where unreliable connections are located: lost packets toward the end of a path mean that a failure message has to travel all the way back to the sender, causing more network congestion.

Prof. Krishnamurthy proposes ETOP, a protocol that takes into account the number of hops, reliability, and the location of unreliable connections. The metric uses probe packets to determine where unreliable connections are and places greater cost on unreliable segments that are farther along the path. Because of this, the protocol generates non-commutative paths (the path from router A to B isn't necessarily the same as from B to A). Regardless, he has proven that greedy algorithms work for this metric, and so Dijkstra's algorithm is usable for the protocol.

ETOP on average shows better goodput (useful bandwidth) than ETX, especially for multi-hop paths. The protocol interacts somewhat chaotically with TCP, making its congestion window fluctuate wildly, but goodput is almost always better nonetheless. ETOP also shows worse round-trip time since it may favor paths with more hops.

Prof. Krishnamurthy also talked about security in a wireless network, in particular about safeguards against certain common attacks. A wormhole attack places an attractive link in the network, allowing the attacker to snoop on many of the packets going through the network. Gray hole and black hole attacks expand on this concept by also consuming some or all of the incoming packets. This effectively creates a denial of service attack. Sybil attacks expand on wormholes by spoofing as multiple clients to obtain disproportionately many packets. Colluded attacks involve multiple clients working together to give their fake connections more reliability.

His proposal for network security involves a protocol that detects and uproots attackers. The protocol (separate from ETOP) stops attackers by first looking for suspicious traits in every client on the network. Questionable clients are then interrogated: challenge packets are sent to the offending clients, which must be replied to in a certain way. Failure rates incompatible with a client's advertised reliability get that client expelled from the network. This protocol has a very high success rate and a low false positive rate, but it makes the network significantly less efficient.

"Survivable Routing in Multi-hop Wireless Networks"

Today we attended a talk given by Prof. Krishnamurthy on "Survivable Routing in Multi-hop Wireless Networks." We are starting to see multi-hop networks due to interest from industry-leading companies and the availability of cheap hardware to implement them. These networks consist of devices such as laptops, handhelds, and smartphones, plus routers that act as access points to interconnect these terminal devices. They are gaining popularity because they use cheap hardware, they are easily and quickly expandable, and they are manageable and reliable. They are good candidates for homes, campuses, and hotels, as well as public systems such as surveillance cameras, and especially military communication. There are many experimental networks in use at major universities and cities, including MIT, Rutgers, Houston, and Singapore.

The talk then moved on to routing in these multi-hop wireless networks. While these networks have their benefits, there are also issues that have to be ironed out, including security, the use of shortest-path routing, and the lack of attention to lower-level functionality such as modulation rate. While shortest-path routing is good for wired communication, in wireless communication it leads to long links of poor quality and, as a result, poor performance. There is no ideal way to establish link quality; packet delivery ratio is a reasonable tool but still not the best. That is where the Expected Transmission Count (ETX) comes in. In ETX, every router sends probes and every router receives them. The ETX value of a link can then be computed as 1/p, where p is the ratio of total probes received to total probes sent, i.e., the probability that a probe reaches the node. The route can then be determined by selecting the routers with the smallest ETX values. This was further extended with Expected Transmission Time (ETT), which accounts for multiple transmission rates by sending probes at multiple rates.
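As a tiny illustration of the formula (our own helper, not from the talk):

    /* ETX = 1/p, where p is the measured delivery ratio of the link.
       Assumes at least one probe was received. */
    double etx(int probes_received, int probes_sent)
    {
        double p = (double)probes_received / probes_sent;
        return 1.0 / p;   /* e.g., p = 0.5 means 2 expected transmissions */
    }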

Next we discussed the idea that order matters in these routing algorithms and that neither ETX nor ETT deals with the position of the nodes. ETOP was designed to capture the three factors that affect the cost of a path: the number of nodes, the quality of the links, and the relative position of each node. The cost of an ETOP path can be determined by adding the expected number of transmissions to the number of retransmissions required to deliver the packet. ETOP has shown better performance than ETX, and its higher reliability allows TCP to ramp up more aggressively, so TCP throughput improves.

The final issue is security. These networks face security issues such as wormholes or corrupted nodes that can lead to data compromise. An attacker can report false delivery-probability values or collude with another attacker, causing routes to pass through adversaries. Attackers can also send extra probes to indicate higher quality and attract routes. To deal with such attackers, nodes should detect whether probes are being received at a higher frequency than expected, or place a random nonce in each probe that must be reported back when link qualities are computed, preventing adversaries from taking control of routes.

Lab 4: CGI Search Engine

The initial lab on SNMP has been canceled due to technical issues.
The pre-lab questions will become part of the 2nd homework assignment.

Instead, I have posted a new lab on developing a CGI Search Engine, which is due Wednesday, Apr 28 at 1:00 pm.

Also, there will be a bonus for the students who scheduled first in the lab, including previous labs.

Tuesday, April 13, 2010

Lecture 20: HTTP (April 12)

The lecture started with a presentation by Jeffrey about network neutrality, a heated topic about whether or not to keep the Internet an equal service for all. He touched on various aspects of the topic, both for and against. For the remainder of the lecture we covered web servers and CGI.

Web servers communicate using HTTP. They listen for incoming connections from client programs on well-known ports. Once a connection with a client is made, the server receives requests from and sends data to the client. Web servers can provide dynamic documents, which enable things like custom ads, database access, shopping carts, and so on.

Web servers can be general or custom. A custom server needs a method of mapping HTTP requests to service requests, sending back data, and handling errors, with the drawbacks of needing dedicated ports and duplicating code such as the basic TCP server, HTTP parsing, error handling, headers, and access control.
The alternative is the smart web server, which takes a general-purpose web server and processes documents as it sends them to clients. Server Side Includes (SSI) provide smart servers with commands embedded in HTML comments; these commands are known as SSI directives. Servers also support many scripting languages, such as ASP, PerlScript, and JavaScript.
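For example, a typical SSI directive embedded in an HTML comment looks like this (the file name is made up):

    <!--#include virtual="header.html" -->

which tells the server to splice the named document into the page before sending it.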

The Common Gateway Interface (CGI) was then discussed. CGI is a standard mechanism for associating URLs with server-executable programs: a protocol that specifies how requests are passed to external programs and how responses are sent back to the client. The server in this case acts as an HTTP front end between the client and the CGI program.
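As a rough sketch of the mechanism (our own example, not from the slides): the server hands the program its input through environment variables such as QUERY_STRING, and whatever the program prints, headers first, then a blank line, then the body, is relayed back to the client.

    /* Minimal CGI program in C; echoing the query string is illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *query = getenv("QUERY_STRING");    /* e.g. "q=networks" */

        printf("Content-Type: text/html\r\n\r\n");     /* header, blank line */
        printf("<html><body>You searched for: %s</body></html>\n",
               query ? query : "(nothing)");
        return 0;
    }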

CGI will be completed in the next lecture.

Thursday, April 8, 2010

Lecture 19: HTTP (April 7)

We began today with a short presentation from Justin about IPv6 and its differences from IPv4. We then began a discussion about the HTTP protocol. HTTP is the protocol that supports communication between browsers and servers: a client requests something from the server, and the server replies to that request. HTTP connections are stateless, and a connection can be either persistent or non-persistent.

An HTTP request takes the form of lines of ASCII text, each followed by a carriage return and line feed. In HTTP version 1.0 and above, the version number is required on the request line, while in earlier versions it was not.
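As an illustration (the host and path are made up), a minimal HTTP/1.0 request looks like:

    GET /index.html HTTP/1.0
    Host: www.example.com

with a blank line ending the request; the server answers with a status line such as "HTTP/1.0 200 OK", its own headers, and then the document.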

HTTP methods that are supported across the board are GET, POST, and HEAD. Servers that use HTTP 1.1 will support PUT, DELETE, TRACE, and OPTIONS.

We concluded the class with a discussion of what cookies are and what they are used for. Cookies allow web sites to keep state information about you. A cookie value is sent as a field in the HTTP header, and the server compares that value with its database of cookies. This way, state information is preserved across requests.
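For example (the header names are the standard ones; the value is made up), a server's first response might include

    Set-Cookie: session=7439

and the browser then attaches

    Cookie: session=7439

to each later request to that site, letting the server look the session up in its database.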

Tuesday, April 6, 2010

Lecture 17: Socket Programming Issues

Today's lecture started as all lectures do, with a review of the previous topic, 'TCP/UDP Sockets'. Dr. Gunes began by reiterating the specific code used to create a TCP socket. He also discussed the other functions used to send and receive data through the transport layer, including listen(), accept(), and close(). These functions are system calls because they interface with resources inaccessible at the application layer. After re-explaining the TCP calls, the professor progressed to TCP's sloppy brother, UDP, mentioning several more system calls and some errors.

After the lecture review was complete, the professor transitioned to the new information. The main idea of this lecture was to take the system calls and low level interface from the previous lecture and understand how to use these tools to create functional applications using the protocols.

The first application the instructor discussed was a generic TCP client. When designing such an application many things have to be taken into account. If blocking IO is used, the listening aspect of the program must be in a separate thread. If the IO is non-blocking, then it must be polled continuously.

Another option is to use alarms and interrupts. Or, the "select()" function could be used. The instructor then spent several slides describing how to use the select function.
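A minimal sketch of the select() approach (our own code, not the lecture's): the call blocks until one of the watched descriptors becomes readable.

    #include <sys/select.h>
    #include <unistd.h>

    void wait_for_input(int sockfd)
    {
        fd_set readfds;

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);   /* watch the keyboard */
        FD_SET(sockfd, &readfds);         /* and the socket */

        /* Blocks until at least one descriptor is readable (no timeout). */
        select(sockfd + 1, &readfds, NULL, NULL, NULL);

        if (FD_ISSET(sockfd, &readfds)) {
            /* data arrived from the server: read() from sockfd */
        }
        if (FD_ISSET(STDIN_FILENO, &readfds)) {
            /* the user typed something: read() from stdin */
        }
    }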

After discussing these programming decisions, specific errors were discussed. We learned about error codes and how to read their descriptions using the strerror() function.
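The pattern is simply (the helper name is ours):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* System calls set errno on failure; strerror() turns the code
       into a human-readable description. */
    void report(const char *call)
    {
        fprintf(stderr, "%s failed: %s (errno %d)\n",
                call, strerror(errno), errno);
    }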

Following the errors, general programming strategies for both clients and servers were discussed. When designing a client, several things must be attended to, including identifying the server and selecting a port. When designing a server, the programmer must decide between concurrent and iterative client handling; between these two choices are many differences that must be considered.

Monday, April 5, 2010

Lecture 16 TCP and UDP Sockets

This lecture covered the creation of sockets for TCP and UDP connections. To create a TCP socket one calls socket(PF_INET, SOCK_STREAM, 0), which returns a non-negative integer referencing the newly created socket; a negative return value indicates failure. A newly created socket can be bound to a specific port and address using bind(int sockfd, const struct sockaddr *myaddr, socklen_t addrlen), or the user can let the operating system take care of it.

A server that needs to accept incoming TCP connections must establish a passive-mode TCP socket. This is done with listen(int sockfd, int backlog), which takes a socket reference and an integer specifying the number of connections to queue for the server. The intricacies of establishing a TCP connection are handled by the operating system; all the server has to do is call accept(int sockfd, struct sockaddr *cliaddr, socklen_t *addrlen) with a predefined socket and a location to hold the struct that describes the client's address. The accept function returns a socket reference as a non-negative integer, or a negative value indicating that an error has occurred. On an active TCP socket, the read and write functions may be used to receive and send data. To terminate a TCP connection, calling close will handle all the TCP details involved in tearing down the connection. For a client to connect to a server, the connect function is used to establish a TCP connection; it must be supplied with an initialized socket reference along with the address of the server to connect to. Upon a successful connect, read and write may be used to receive and send data.
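Putting those calls together, a condensed sketch of a passive-mode echo server might look like this (the port, backlog, and buffer size are arbitrary choices, and error checking is omitted):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void run_echo_server(void)
    {
        struct sockaddr_in addr;
        int sockfd = socket(PF_INET, SOCK_STREAM, 0);   /* create TCP socket */

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);       /* any local interface */
        addr.sin_port = htons(5000);
        bind(sockfd, (struct sockaddr *)&addr, sizeof(addr));

        listen(sockfd, 5);                              /* passive mode, queue of 5 */
        for (;;) {
            int conn = accept(sockfd, NULL, NULL);      /* block for a client */
            char buf[512];
            ssize_t n = read(conn, buf, sizeof(buf));   /* receive */
            if (n > 0)
                write(conn, buf, n);                    /* echo it back */
            close(conn);                                /* OS handles teardown */
        }
    }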

To create a UDP socket, socket(...) is called with the SOCK_DGRAM constant for the type parameter. Binding an address is also available to UDP sockets, and the process is the same as for TCP sockets. Since UDP is connectionless, once a socket has been created, data can immediately be sent from it using sendto(...), which takes, besides the usual socket descriptor and data, the destination address in the form of a structure. This function returns the number of bytes that the operating system accepted for sending; since UDP is unreliable, there is no indication of how much of the sent data actually made it to the destination. To receive data, recvfrom(...) may be called with a socket descriptor, buffer space, and a location to store the sender's address. Since UDP is unreliable and recvfrom(...) is a blocking call, countermeasures must be taken to avoid waiting forever for lost packets. One of these is to use SIGALRM, which interrupts the recvfrom call after a specified amount of time has elapsed.
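A sketch of that countermeasure (our own code; the 3-second timeout is an arbitrary choice):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; /* just interrupt recvfrom */ }

    ssize_t recv_with_timeout(int sockfd, char *buf, size_t len)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_alarm;   /* no SA_RESTART, so recvfrom fails with EINTR */
        sigaction(SIGALRM, &sa, NULL);

        alarm(3);                   /* give the reply 3 seconds */
        ssize_t n = recvfrom(sockfd, buf, len, 0, NULL, NULL);
        alarm(0);                   /* cancel the timer */
        if (n < 0)
            fprintf(stderr, "no reply (timed out or error)\n");
        return n;
    }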

An interesting feature available in the API is a "connected mode" for UDP, which allows the use of write(), send(), read(), and recv() on that socket descriptor without specifying the destination address each time. This, however, is only a convenience; it does not establish an actual connection with the peer.
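A short sketch of connected mode (the peer address is assumed to be filled in elsewhere):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void connected_udp_demo(struct sockaddr_in *server, socklen_t slen)
    {
        char buf[512];
        int sockfd = socket(PF_INET, SOCK_DGRAM, 0);

        /* Records the peer address in the kernel; nothing goes on the wire. */
        connect(sockfd, (struct sockaddr *)server, slen);
        write(sockfd, "hello", 5);        /* no destination argument needed */
        read(sockfd, buf, sizeof(buf));   /* only datagrams from that peer */
        close(sockfd);
    }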

Monday, March 29, 2010

Lecture 13: FTP

This lecture was about the File Transfer Protocol (FTP). The main purpose of FTP is to transfer files from one computer to another across a network. FTP was designed to promote the sharing of files, encourage the use of remote computers, and transfer data reliably between different end hosts. You might think at first that transferring files between computers is relatively easy. However, different operating systems have different structures for handling files, which can differ drastically from one another. FTP is responsible for handling all these issues to make the transfer seamless for the users.

When an FTP connection is made, two separate connections are established: one for control information and the other for data. First, the control connection is established and the end hosts communicate with one another. When a request for a file is made over the control connection, the data connection is established and the data transfer occurs. As data is transferred, replies are also sent back and forth; replies travel only over the control connection and carry information about the state of the transfer.

Trivial File Transfer Protocol (TFTP) is a simplified version of FTP. It was designed to be small and simple and able to fit into the ROM of a computer. TFTP is mainly used for bootstrapping diskless systems.

Thursday, March 25, 2010

Lecture 15: Socket Programming

The lecture for Wednesday, March 25, focused primarily on socket programming. Sockets are an Application Programming Interface (API) for use with TCP/IP. A network API provides services that interface between the application and the protocol software. We want a network API to have certain features: a generic programming interface that supports multiple communication protocol suites, support for both message-oriented and connection-oriented communication, the ability to work with existing I/O services, and independence from the OS. Certain functions are also needed: specifying local and remote connection endpoints, initiating a connection, waiting for incoming connections, sending and receiving data, terminating a connection, and handling errors.

A socket is an abstract representation of a communication endpoint; to establish a connection we must specify the communication endpoint addresses. To create a socket we call the function int socket(int family, int type, int proto), where family specifies the protocol family, type specifies the type of service, and proto specifies the specific protocol. This function returns a socket descriptor, or -1 on error, and allocates the resources needed for a communication endpoint, but endpoint addressing has yet to be dealt with. Generic socket addresses are supplied through a struct that contains three values: an address length, a family, and the address value. This addressing can be done in both IPv4 and IPv6.

Once the addressing has been handled, it is time to bind the address to a socket. To do this we use the bind() system call, which returns 0 on success or -1 on error. Calling bind() assigns the address specified by the structure to the socket descriptor. There are a number of uses for bind, such as allowing a client to bind to a port, or a server to a well-known address.
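For IPv4, the bind step might look like this sketch (the port is a parameter; INADDR_ANY accepts any local interface):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int bind_to_port(int sockfd, unsigned short port)
    {
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local interface */
        addr.sin_port = htons(port);               /* network byte order */
        return bind(sockfd, (struct sockaddr *)&addr, sizeof(addr));  /* 0 or -1 */
    }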

At the end of the lecture we went over some more socket system calls, such as the general-purpose read(), write(), and close(). There are also calls specific to connection-oriented (TCP) service, namely connect(), listen(), and accept(), as well as calls for connectionless (UDP) service: send() and recv().

Monday, March 22, 2010

Lab 3: Transport Layer Protocols

You may post your questions and comments regarding the Transport Layer Protocols lab under this blog post.

You may schedule lab sessions to work in the networking lab using the link posted on WebCT.

Do not forget to turn in the solutions for the Question Sheet for the PreLab on pages 173-174 by Thursday, March 25th at 1pm on WebCT.

Tuesday, March 9, 2010

Lab 2 deadline

The deadline for the Routing Information Protocol lab has been extended to Friday at 11:00 pm.

Saturday, March 6, 2010

Lecture 12: Telnet, Email, etc.



The lecture given on Wednesday, March 3rd started, as always, with a review of the previous lecture and then moved on to the new material: the application-level protocols Telnet, SMTP, POP, IMAP, and MIME. From the previous lecture, Dr. Gunes repeated the key functions of a router, how much buffering is necessary, bridges, and spanning trees.
The new information was, in my opinion, more enjoyable. The reason is obvious: application-level protocols are easier to understand and experiment with. Almost all of the protocols mentioned in Wednesday's class were not only explained but demonstrated.

The first protocol that Dr. Gunes taught us about was Telnet. Telnet is a simple, bidirectional communication protocol that uses byte-oriented communication. It is a generic TCP client that sends whatever you type in the host terminal over the TCP socket. Many Unix machines have telnet servers offering functionality like echo running by default. The professor demonstrated several such examples during class, to the delight of the students.

Next, the instructor educated us on several basic email protocols, including SMTP, POP, IMAP, and MIME. SMTP, the Simple Mail Transfer Protocol, is used to send emails; it can even be used through telnet! It was shown to be insecure because it is very easy to send fraudulent emails. Next the instructor taught us about POP, the Post Office Protocol. This mail-receiving protocol allows the client to read emails by pulling them entirely from the server, which allows the emails to be read offline. After that, he taught us about IMAP, the Internet Message Access Protocol, which is more flexible than POP3 but more complicated. Finally we discussed webmail, which we are all quite familiar with.

Monday, March 1, 2010

Lecture 11: Router Architectures

Today's lecture covered router architecture, explaining that there are two key router functions: routing and forwarding. Routing means running routing algorithms to decide the best path to a destination; forwarding means moving data from the input link to the output link.
Input ports, output ports, and the switching fabric are the important components of a router. Input ports perform the physical layer functions (line termination), the data link layer functions (data-link processing), and the network layer functions (lookup, forwarding, and queuing). Input queuing takes place when data arrives faster than it can be forwarded into the switching fabric. Head-of-line (HOL) blocking occurs when data at the front of the queue stops other data in the queue from moving forward. The switching fabric connects the router's input ports to its output ports; there are three types of switching fabrics: memory, bus, and crossbar. The output ports perform the reverse of the input port functions.
The lecture also covered bridges. A bridge connects networks. Bridges are needed for two reasons: they strengthen signals when the communication distance is large, and they provide autonomy.
The bridge must have a database to keep track of which hosts are on which networks. To build it, the system administrator can hard-code the addresses, or the bridge can learn the information as it goes. With a learning bridge, a host can move from one network to another and hosts can be added at any time, with no manual setup by humans. The problem with this is the possibility of loops in a system of two or more bridges. Ways to fix this include having the bridges detect loops and report them back to the user, or designing the bridges to prune links themselves so there are no loops in the network.

Sunday, February 28, 2010

Lecture 9: Routing the Internet (Feb 22)

In today's lecture, Dr. Gunes talked about how routing in the Internet works. Across the entire Internet, it would be impractical to store all destinations in a routing table; the tables would be very large and inefficient. Network administrators also do not want to be told how to configure their own routers, so each network may differ a little from the others. They are all autonomous systems (ASes). Within each network, routers communicate with the same intra-autonomous-system routing protocol, but for ASes to communicate with each other, there must be gateway routers to bind them directly together.

The forwarding tables in each AS must hold intra-AS routes as well as inter-AS routes to reach external destinations. An example from the lecture shows the tasks of inter-AS routing: when a datagram from inside needs to be delivered outside, the AS must learn which destinations are reachable and through which gateway. Once this is known, the AS propagates the information to all the routers in its network, so they are informed of how to reach that destination.

For intra-AS routing, there are a few common routing protocols. The ones mentioned in the lecture were the Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). RIP uses the distance vector algorithm mentioned in the last lecture, and OSPF uses the link state algorithm mentioned previously as well. A major point about RIP is that it constantly exchanges its distance vectors, also called advertisements, with its neighbors every thirty seconds, while OSPF keeps a topological map at each node and floods its advertisements across the network.

Thursday, February 25, 2010

Lecture 10: BGP (Feb 24)

Today's lecture started off with the introduction of Lab 2 and its requirements. We then began a discussion about BGP and what it entails. BGP is essentially a protocol for subnets to announce "I am here." Routers exchange routing information with each other in order to learn the reachability of other routers. BGP actually consists of two separate protocols: one internal to the autonomous system and one for inter-AS communication. We have these different protocols because we want inter-AS routing to focus on policies while intra-AS routing can focus on performance.

Wednesday, February 24, 2010

Lab 2 on RIP

You may post your questions and comments regarding the RIP lab under this blog post.

You may schedule lab sessions to work in the networking lab using the link posted on WebCT.

Do not forget to turn in the solutions for Question Sheet for the PreLab in Page 131 of Mastering Networks: An Internet Lab Manual by Monday, March 1st by 1pm on WebCT.

Clarification: You may find the solution to the first question in PreLab at http://www.techonia.com/configure-linux-pc-router.

Monday, February 22, 2010

Lecture 8: Routing Algorithms (Feb 17)

Today's lecture covered routing algorithms, which routers use to determine where data packets should go in a network. The problem is essentially that of finding the shortest route through a weighted graph.

These algorithms may use global or local information, and the routing itself can be relatively static or dynamic over time. Dijkstra's algorithm uses global information and constructs a shortest-path tree from a given router; for this strategy, every router remembers the complete topology of the network and the paths toward each location. The Bellman-Ford algorithm uses local information and broadcast network changes to approximate, and over time attain, optimal routes; for this strategy, each router remembers its known closest distance to every location in the network and the first hop toward that location.
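A compact sketch of the distance-vector update at the heart of the Bellman-Ford approach (the array names and table size are illustrative):

    #define N 16   /* number of destinations, illustrative */

    /* On hearing a neighbor's distance vector, keep the cheaper of the
       current route and the route through that neighbor. */
    void on_advertisement(int neighbor, int link_cost,
                          const int neighbor_dist[N],
                          int dist[N], int next_hop[N])
    {
        for (int d = 0; d < N; d++) {
            int via = link_cost + neighbor_dist[d];  /* cost through neighbor */
            if (via < dist[d]) {
                dist[d] = via;            /* found a shorter route */
                next_hop[d] = neighbor;   /* remember only the first hop */
            }
        }
    }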

In both algorithms, changes in the network may cause instabilities. Dijkstra's algorithm may oscillate if packet traffic itself changes edge weights in the network graph. The Bellman-Ford algorithm has trouble propagating information about increases in transmission costs or dead connections.

Wednesday, February 10, 2010

Lecture 7: Routing (Feb 10)

Today's instruction was focused on how routing works. It is important to understand the difference between the terms forwarding and routing: forwarding describes what happens to a packet at one hop, while routing refers to the whole path taken by a packet.

There were two major types of connections discussed, virtual circuits and datagram networks. Virtual circuits are used by technologies such as ATM, Frame Relay, and X.25. These methods create a connection from end to end, which allows ATM to guarantee no loss, ordering, timing, and rate of data transfer. Virtual circuits are efficient and practical in smaller network models, but fail in larger networks because resources are tied up in reserved connections. Datagram networks, on the other hand, are much more efficient in large-scale networks. IP is the datagram network protocol. There is no dedicated connection from one client to the other; the data is forwarded from one device to the next until it reaches the destination. IP cannot make any of the guarantees that ATM can, but it keeps the network simple and fast.

Next the lecture focused on the basics of how a forwarding table works. Forwarding tables assign IP address ranges to a router's local interfaces: if an IP address falls within a certain range, the packet is sent out on that interface. A common method used in forwarding tables is longest prefix matching, which means that the forwarding table entry matching the most digits of the IP address determines the interface the data will be sent on.
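A small sketch of longest prefix matching (the table layout is illustrative): the entry whose prefix matches the most leading bits of the destination wins.

    #include <stdint.h>

    struct fwd_entry { uint32_t prefix; int prefix_len; int interface; };

    int lookup(uint32_t dest, const struct fwd_entry *table, int n)
    {
        int best_len = -1, best_if = -1;

        for (int i = 0; i < n; i++) {
            /* Mask keeping the top prefix_len bits of the address. */
            uint32_t mask =
                table[i].prefix_len ? ~0u << (32 - table[i].prefix_len) : 0;
            if ((dest & mask) == (table[i].prefix & mask) &&
                table[i].prefix_len > best_len) {
                best_len = table[i].prefix_len;
                best_if  = table[i].interface;
            }
        }
        return best_if;   /* -1 if nothing matches */
    }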

Monday, February 8, 2010

Student presentations

I have uploaded the link for the class presentation schedule on WebCT under announcements.
Indicate your preferred date and topic on the spreadsheet.

Lecture 6: TCP (Feb. 8)

Today's instruction was focused on how the TCP protocol lets two systems communicate. First, a connection needs to be made between the two parties. This is done via a SYN, SYN-ACK, ACK three-way handshake, which is necessary to provide each party with the other's initial sequence number and to verify the connection. After that, data can be transmitted by both sender and receiver. Each party keeps track of the other's sequence number, window size, and request number (ACK) to ensure that all the data is delivered reliably. It's worth noting that each party keeps two buffers: one holds the incoming data from the other party, and the other holds the data it has sent itself, in case it needs to be sent again. To terminate a connection, you can either send a RST segment, which abruptly ends the session, or do it the nice way. The nice way involves one party sending a FIN segment, which is followed by an ACK from the other party. The connection is still alive, however, because the other party might still have data to send. Once the other party is finished, it sends its own FIN segment, which is again ACKed. Then the connection is done, but TCP stays on the line for a while to ensure that any lost packets have a chance to reach their destination.

Sunday, February 7, 2010

Lecture 5: TCP/IP(cont.) (Feb 3)

In today's lecture Dr. Gunes first went over the previous lecture, reminding us of CSMA, Ethernet and its architecture, IP addresses, IP formats, and the structure of the IP datagram. Then he talked about the TCP/IP model in depth. First, he explained IP flow control and error detection mechanisms. He talked about ICMP and showed the differences between the types of ICMP messages. Then he explained UDP, the other transport layer protocol in TCP/IP. After UDP, we began discussing the Transmission Control Protocol in depth. Dr. Gunes explained the four most important features of TCP: it is connection-oriented, reliable, full-duplex, and byte-stream. Then he talked about TCP ports; one important thing regarding ports is that TCP and UDP ports have different namespaces, so the same port number can be used for both. Then we talked about the TCP segment format and addressing issues in TCP/IP. Finally, we compared TCP and UDP.

Monday, February 1, 2010

Lecture 4: TCP/IP (Feb 1)

In today's lecture, Dr. Gunes spoke about Ethernet frames, the IP protocol, and how subnetting works. We learned the structure of the Ethernet frame and what each section of the frame is responsible for. As a class we discussed Ethernet addressing and how the 48-bit MAC addresses are used for one-to-one addressing; when a packet is to be broadcast to all MAC addresses in the subnetwork, the destination MAC address consists of all 1's. The Internet Protocol is an extremely integral part of the Internet as we know it. In class we discussed the distinct differences between MAC addressing and IP addressing. One interesting point was the special host addresses in IP: a host portion of all 1's is used to broadcast to all IP addresses in the network, while all 0's refers to the network itself. We discussed how IP addresses are distributed across the continents and the reasons for moving away from IPv4, which uses 32-bit addresses, toward IPv6, which uses 128-bit addresses. We talked about the different classes of IP addresses. We talked about how IP addresses have host and network IDs, and how the least significant bits can be masked to divide a subnetwork and group hosts based on physical topology. Near the end of the lecture we went over address resolution, the process by which you can use the IP address of a host to query for its MAC address. Dr. Gunes ended the lecture talking about IP datagram structure and MTUs. Overall this lecture was great for understanding the fundamentals of how IP and MAC addresses are used at the lower layers to route traffic.
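In code, the host/network split described above is just bit masking; a tiny sketch (the values are illustrative):

    #include <stdint.h>

    /* The subnet mask selects the network bits; the rest identify the host. */
    uint32_t network_id(uint32_t addr, uint32_t mask) { return addr & mask; }
    uint32_t host_id(uint32_t addr, uint32_t mask)    { return addr & ~mask; }

For example, address 192.168.5.7 with mask 255.255.255.0 gives network 192.168.5.0 and host 7.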

Lab 1 on TCP/IP protocols

You may post your questions and comments regarding the WireShark lab on IP, ICMP, Ethernet and ARP under this blog post.

Wednesday, January 27, 2010

Lecture 3: Protocols and Layering (Jan 27)

In this lecture, Dr. Gunes went further in depth on the topic of protocols and layering. A protocol was defined and described using the human-conversation analogy. We compared and contrasted programs and processes. We talked about the server-client relationship and how a server is a process running on a machine. Dr. Gunes spoke a lot about the OSI model and its importance in networked systems. We then went into depth on each of the 7 layers in the OSI model and talked about the responsibilities and issues corresponding to each layer. As a class we reached a fundamental understanding of how the headers of a packet are used to navigate each of the 7 layers. Particularly interesting to me was how the MAC address is used for low-level routing. This was an extremely informative lecture that delivered a firm grasp of layering and protocol fundamentals.

Monday, January 25, 2010

Lecture 2: Introduction (Jan 25)

Today, Dr. Gunes gave an introduction to network systems. He gave the definition of a "network" and explained and compared multi-access and point-to-point networks. Then he talked about different types of networks according to their sizes: Local Area Networks (LANs), Wide Area Networks (WANs), and Metropolitan Area Networks (MANs), and compared them according to their reliability and speed. Then he went over the Internet: its components and its roughly hierarchical architecture. He explained the differences between Tier-1, 2, and 3 ISPs; for example, a university network is Tier-3. Then he talked about Internet service providers and the Internet's design goals and principles. An interesting thing is that there is no mention of security among the Internet design goals. Finally, he explained why we use layering and its advantages, and gave some examples of layered systems; Federal Express, for instance, is a good example of a layered system. In conclusion, I benefited greatly from this lecture, since it was a good introduction to network systems.