This is a classic paper that introduced the notions of congestion control and avoidance in TCP. The work was motivated by the congestion collapse episodes on the Internet in 1986. At the time, TCP used its window only for flow control, sending packets as long as the receiver advertised enough buffer space; this led to "congestion collapse." Jacobson and Karels propose algorithms based on the "conservation of packets" principle: in equilibrium, a new packet is not put into the network until an old packet leaves. The slow start algorithm exploits the fact that the receiver can generate ACKs no faster than data packets get through the network, so the ACK clock paces the sender; the congestion window is increased exponentially until it approaches the bandwidth-delay product of the path. They also estimate the variance of the RTT, yielding a more accurate retransmit timer and preventing spurious retransmissions. Packet loss is treated as an implicit indication of congestion, and the reaction is a multiplicative decrease of the window; in the absence of loss, the window grows additively. This additive-increase/multiplicative-decrease (AIMD) policy can be shown to converge to a fair allocation.
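The window rules summarized above can be sketched roughly as follows. This is a simplified illustration, not the paper's actual code: variable names are mine, real TCPs count `cwnd` in bytes rather than segments, and the slow-start threshold bookkeeping is abbreviated.

```python
def on_ack(cwnd, ssthresh):
    """Grow the congestion window on each new ACK (window in segments)."""
    if cwnd < ssthresh:
        return cwnd + 1          # slow start: doubles per RTT
    return cwnd + 1 / cwnd       # congestion avoidance: +1 segment per RTT

def on_loss(cwnd):
    """Treat loss as congestion: multiplicative decrease, then slow start."""
    ssthresh = max(cwnd / 2, 2)  # remember half the window as the threshold
    return 1, ssthresh           # restart from one segment
```

The per-ACK increment of `1/cwnd` during congestion avoidance is what makes the increase additive per round trip, since roughly `cwnd` ACKs arrive per RTT.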
Overall, this was a fun and important paper to read. I especially liked that the paper proposed concrete values for the constants in its equations; most systems papers I read leave such constants to be "appropriately set."
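As an example of those concrete constants, the retransmit-timer estimator uses fixed gains of 1/8 for the smoothed RTT and 1/4 for the mean deviation. The sketch below uses my own function and variable names; the `srtt + 4 * rttvar` form of the timeout is the one that later became standard (RFC 6298), and the paper works with a mean-deviation estimate rather than a true variance because it is cheaper to compute.

```python
def update_rto(srtt, rttvar, sample):
    """Update the smoothed RTT, its mean deviation, and the retransmit
    timeout from one new RTT measurement, using the paper's gains."""
    err = sample - srtt
    srtt += err / 8                    # gain 1/8 on the mean
    rttvar += (abs(err) - rttvar) / 4  # gain 1/4 on the deviation
    return srtt, rttvar, srtt + 4 * rttvar
```

Folding the deviation into the timeout is what prevents the spurious retransmissions mentioned above: the timer backs off automatically when the RTT is noisy.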
1. TCP justifiably caters to a wide variety of networks and is therefore conservative. Would an MIAD or MIMD policy work in bandwidth-plentiful settings such as datacenters? This might be interesting to explore.
2. On the topic of estimating the retransmission timer, the paper by Karn & Partridge on the retransmission ambiguity problem is relevant and a good read.
3. In Section 3, the authors say they do not want to modify all gateways in the Internet (to add a congestion bit). It is striking that even that early, when the Internet was a small fraction of its current size, people were reluctant to propose wholesale changes.