Tuesday, September 30, 2008

MACAW

Media access in wireless networks is a harder problem than in wired networks, and this paper gives a nice description of the lay of the land. CSMA senses the medium before transmitting, but since collisions occur at the receiver rather than the sender, it suffers from the hidden- and exposed-terminal problems. MACA fixed this with a Request-to-Send (RTS) / Clear-to-Send (CTS) control handshake. MACA used an exponential back-off timer which, as described in this paper, becomes a source of unfairness. The crux of the problem is that the back-off timer is a function of the number of attempts an individual node has made, so a node that has backed off repeatedly keeps losing the channel to one that has not.

This paper proposes MACAW, which makes the following modifications:

1. Fairness is achieved by having nodes copy the back-off value carried in any packet they overhear, so that all nodes share a common estimate of congestion (see the sketch after this list).

2. A Data-Sending (DS) packet and a Request-for-RTS (RRTS) mechanism to synchronize timing information across senders.

3. Location-tagged back-off timers, since congestion is more a property of a sender-receiver pair than of the sender alone.
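To make the back-off changes concrete, here is a minimal sketch contrasting MACA-style binary exponential back-off with MACAW's gentler multiplicative-increase, linear-decrease (MILD) rule plus back-off copying. The class structure and constants are my own illustrative assumptions, not code from the paper.

```python
# Toy sketch (not the paper's code) of the two back-off disciplines.
import random

BO_MIN, BO_MAX = 2, 64  # back-off bounds in slot times (assumed values)

class MacaNode:
    """MACA: double on collision, reset to minimum on success."""
    def __init__(self):
        self.backoff = BO_MIN

    def on_collision(self):
        self.backoff = min(2 * self.backoff, BO_MAX)

    def on_success(self):
        self.backoff = BO_MIN  # a winner resets and keeps winning

class MacawNode:
    """MACAW: multiplicative increase, linear decrease (MILD),
    plus copying the back-off value carried in overheard packets."""
    def __init__(self):
        self.backoff = BO_MIN

    def on_collision(self):
        self.backoff = min(int(1.5 * self.backoff), BO_MAX)

    def on_success(self):
        self.backoff = max(self.backoff - 1, BO_MIN)

    def on_overheard_packet(self, advertised_backoff):
        # All nodes converge to a shared view of congestion,
        # which is what restores fairness.
        self.backoff = advertised_backoff

    def wait_slots(self):
        return random.randint(1, self.backoff)
```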

I vote to keep this paper in the reading list, as it gives a good stand-alone overview of the area. I wasn't happy about the experimental setting (base stations and pads) on two counts. First, it was unclear (and the paper does not make a convincing case) how representative this setting is of real wireless LANs. Second, I would have liked enough detail to reproduce their experiments under different conditions; as written, I don't understand what that would take.

A Comparison of Mechanisms for Improving TCP Performance over Wireless Links

TCP was designed for wired links, where packet losses are mainly due to congestion, whereas losses on wireless links are mainly due to corruption. The mechanisms for adapting TCP fall into three categories: link-layer loss recovery (local retransmission and coding), tweaking TCP's end-to-end congestion control mechanisms, and splitting the TCP connection (base station to mobile host becomes a separate TCP connection). This paper compares the performance of these techniques under controlled losses. The techniques primarily focus on hiding wireless losses from TCP so that it does not overreact. TCP-aware link-layer techniques suppress the duplicate acknowledgements arising from locally retransmitted packets so that the sender does not shrink its congestion window in response. End-to-end techniques use selective acknowledgements (SACK) to avoid redundant retransmissions and are a good option. Splitting TCP connections also helps insulate TCP from the vagaries of the wireless world.
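To make the link-layer idea concrete, here is a toy sketch of a snoop-style, TCP-aware agent at the base station: cache unacknowledged data packets, retransmit locally on a duplicate ACK, and suppress that ACK so the sender's congestion window is untouched. All class and field names here are my own assumptions, not the paper's code.

```python
# Toy sketch of a snoop-style agent at the base station (assumed structure).
class SnoopAgent:
    def __init__(self, wireless_link):
        self.link = wireless_link
        self.cache = {}       # seq -> cached data packet
        self.last_ack = -1
        self.dup_acks = 0

    def on_data_from_sender(self, pkt):
        self.cache[pkt.seq] = pkt
        self.link.send(pkt)

    def on_ack_from_mobile(self, ack):
        if ack.num > self.last_ack:
            # New ACK: clean the cache and forward it to the sender.
            for seq in [s for s in self.cache if s < ack.num]:
                del self.cache[seq]
            self.last_ack, self.dup_acks = ack.num, 0
            return ack
        # Duplicate ACK: likely a wireless loss, so retransmit locally
        # and suppress the ACK; the sender never sees it and keeps its
        # congestion window intact.
        self.dup_acks += 1
        if ack.num in self.cache:
            self.link.send(self.cache[ack.num])
        return None
```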

I liked the second and third techniques, with the latter being especially smart (though it is unclear whether base stations can run at line speed with this extra overhead). The second technique is relatively easy to deploy, while the third requires changing TCP semantics and the base stations. Overall, this was a nice survey/evaluation sort of paper and very informative. I vote for keeping it in the reading list.

This paper leaves itself open to the standard criticism of showing results based on simulations rather than real implementations. While the paper would undoubtedly be stronger with real evaluations, I think its lessons are still valuable.

Questions:


1. What eventually won the race? Do we have a modified version of TCP for wireless links now, or does everyone run the same TCP?

2. The techniques for improving TCP over wireless links seem to have no regard for the end-to-end (E2E) principle. Maybe it is all about identifying the "correct" end-points... Can we have a discussion of this in class?

Monday, September 15, 2008

XCP

This paper describes the eXplicit Control Protocol (XCP), which aims to address TCP's inherent instability (repeatedly overshooting and resizing the window) and its conservative nature (e.g., slow start). Like RED, XCP involves the routers in providing explicit feedback. Each data packet carries a feedback field that is updated by the routers along the path and eventually returned to the sender via ACKs. Since each router can only reduce the feedback field, the value the sender receives reflects the "weakest link," and the sender can behave accordingly. I think XCP is a very powerful idea, and the evaluations show that it clearly performs better than TCP.
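As a concrete illustration, here is a minimal sketch of XCP's efficiency controller at a router. The alpha and beta constants are the stability values from the paper; the surrounding plumbing, and the fairness controller that apportions the aggregate feedback among individual packets, are my own simplified assumptions.

```python
# Minimal sketch of XCP's efficiency controller (fairness controller omitted).
ALPHA, BETA = 0.4, 0.226  # stability constants from the paper

def aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, avg_rtt_s):
    """Bytes of extra traffic the router invites over the next control
    interval: positive when there is spare bandwidth, negative when the
    persistent queue is building up."""
    spare_bytes_per_s = (capacity_bps - input_rate_bps) / 8.0
    return ALPHA * avg_rtt_s * spare_bytes_per_s - BETA * queue_bytes

def update_packet_feedback(pkt_feedback, per_packet_share):
    # Each router only ever reduces the feedback carried in the packet,
    # so the sender ends up seeing the bottleneck's ("weakest link's") value.
    return min(pkt_feedback, per_packet_share)
```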

While this paper leaves itself open to the criticism that nodes could manipulate the feedback fields and hijack router resources, I think that is a larger problem and wouldn't criticize XCP in particular. Most ideas that propose differentiated service (QoS) have to deal with the problem of admission control and dishonest hosts; I think Scott's paper on the future of the Internet also mentions admission control. A possible solution could be along the lines of deploying some sort of middle-box that detects tampering with the feedback field (assuming symmetric routes) and checks that hosts actually incorporate the feedback. Solutions like XCP could also be very useful in combating DDoS attacks.

Overall, I found the paper very interesting and suggest we keep it in the list. Also, if this stays, RED could be replaced by a different paper.

Random Early Detection Gateways for Congestion Avoidance

Random Early Detection (RED) aims to give TCP connections feedback before congestion sets in. It does this elegantly by marking packets probabilistically when the average queue size lies between a lower and an upper threshold, and marking every packet beyond the upper threshold. The scheme uses a simple exponentially weighted moving average to track the queue length. RED results in small queues, and hence bounded packet-processing latency, as well as fair allocation of router resources (heavier flows probabilistically receive more marks).
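A minimal sketch of the two pieces, following the formulas in the paper: the moving average of the queue length, and a marking probability that rises linearly between the thresholds. The parameter values are illustrative, and I omit the paper's idle-period adjustment to the average.

```python
# Minimal RED sketch (illustrative parameters; idle-time correction omitted).
import random

W_Q = 0.002              # EWMA weight
MIN_TH, MAX_TH = 5, 15   # thresholds in packets
MAX_P = 0.02             # marking probability at MAX_TH

avg = 0.0
count = 0                # packets since the last mark

def on_packet_arrival(queue_len):
    """Return True if this packet should be marked (or dropped)."""
    global avg, count
    avg = (1 - W_Q) * avg + W_Q * queue_len   # running average
    if avg < MIN_TH:
        count = 0
        return False
    if avg >= MAX_TH:
        count = 0
        return True                           # mark everything
    p_b = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    count += 1
    p_a = p_b / max(1 - count * p_b, 1e-9)    # spreads marks out evenly
    if random.random() < p_a:
        count = 0
        return True
    return False
```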

RED can be used either to drop or to mark packets. Marking assumes well-behaved citizens in the network who will use the marks to reduce their sending rates; dropping requires no changes to end-hosts and piggybacks on TCP's interpretation of a loss as a sign of congestion. Clearly, RED would be "unfair" if some nodes did not respond to marked packets.

I think the idea in this paper is neat and elegant, and something that could be easily deployed in the real world (is it?). But I was baffled that such a simple, neat idea needed such a long paper! While going into depth is definitely valuable, it might be better to flag the "less important" sections.

Wednesday, September 10, 2008

Core-Stateless Fair Queuing

CSFQ makes the important contribution of making Fair Queuing, with all its desirable properties, practical. The biggest problem with FQ was that routers had to maintain per-flow state. The authors show that state maintenance can be pushed to the edges of the network, allowing the core routers to remain stateless. They motivate their work in the context of an "island" containing a mix of edge and core routers (typically an ISP's AS). This recognition that all routers aren't equal (in both capacity and load handled) seems valuable for a whole range of QoS-related ideas.

CSFQ approximates the behavior of FQ. Edge routers estimate the arrival rate of each flow and label packets with this rate. Core routers use FIFO queuing and probabilistically drop packets in proportion to how far the flow's labelled rate exceeds the fair share, mimicking the drops that fair sharing would have imposed. Simulations show that CSFQ does almost as well as Deficit Round Robin while keeping the core stateless.
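To make this concrete, here is a toy sketch of the two halves. The exponential-averaging form of the edge rate estimator follows the paper; the constant K and the way the core obtains its fair-share estimate are simplified assumptions on my part.

```python
# Toy CSFQ sketch: per-flow rate estimation at the edge, stateless
# probabilistic dropping at the core. K and the fair-share input are assumed.
import math
import random

K = 0.1  # averaging window in seconds (illustrative)

class EdgeFlowState:
    """Edge router: per-flow arrival-rate estimate, stamped into packets."""
    def __init__(self):
        self.rate = 0.0
        self.last_arrival = 0.0

    def label(self, pkt_len_bits, now):
        T = max(now - self.last_arrival, 1e-9)   # inter-arrival time
        w = math.exp(-T / K)
        self.rate = (1 - w) * (pkt_len_bits / T) + w * self.rate
        self.last_arrival = now
        return self.rate                          # carried in the header

def core_drop(label_rate, fair_share_rate):
    """Core router: stateless FIFO; drop with probability proportional to
    how far the flow's labelled rate exceeds the estimated fair share."""
    if label_rate <= fair_share_rate:
        return False                              # within fair share
    return random.random() < 1.0 - fair_share_rate / label_rate
```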

In terms of presentation, I really liked that the authors cautiously say their work is useful if storing state at routers "were" impossible, not that it "is". Overall, I really liked this paper and think it should be retained in the reading list.