The architecture was designed with military use in mind, so it was important that temporary disruption and reconfiguration of the network not affect applications; the state of an ongoing conversation is therefore stored at the end-hosts. The need to cater to different application demands – latency, reliability, throughput – led to the decision that multiple transport protocols would be required (TCP and UDP). IP provided the basic building block, the datagram, for the higher-level transport protocols. The architecture also makes minimal assumptions about the underlying physical connectivity – the physical network is only required to transport bytes. Distributed management was achieved by allowing different gateways to have their own administrative policies. The cost of attaching a new node was deemed high, since it involved loading the TCP/IP stack on every end-host. Overall, the architecture made little effort to define the relationship between performance and architectural correctness, partly because of the goal of accommodating diversity.
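The split between a datagram building block and multiple transports is visible directly in the sockets API. A minimal sketch (my own illustration, not from the paper): UDP exposes IP's raw datagram model, while TCP would layer a stateful, reliable byte stream on the same IP substrate.

```python
import socket

# UDP keeps IP's connectionless datagram model: no handshake, no
# delivery guarantee, no connection state in the network.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))      # OS picks a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram", addr)   # fire-and-forget send
data, _ = recv_sock.recvfrom(1024)    # arrives reliably only because
print(data)                           # this is the loopback interface

# TCP would instead use SOCK_STREAM: connect()/accept() set up
# per-connection state held entirely at the two end-hosts
# ("fate-sharing"), and the kernel retransmits and reorders so the
# application sees a reliable, ordered byte stream.
```

The contrast shows why one transport could not serve all applications: the reliability machinery TCP adds is exactly what a latency-sensitive application (e.g. voice) would want to avoid.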
1. Given that survivability was a key goal, was care taken to provide high degrees of redundancy in the network?
2. What is the solution for “adversarial” TCP behavior? What if my TCP is highly aggressive, hardly backs off, and hogs the bandwidth?
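The concern in the question can be made concrete with a toy model (my own sketch, not from the paper): two flows share a fixed-capacity link; one follows standard AIMD (additive increase, multiplicative decrease on congestion), the other increases but simply ignores congestion signals. The link capacity and increase step are arbitrary illustrative numbers.

```python
# Toy congestion model: link capacity of 100 rate units.
capacity = 100.0
a, b = 1.0, 1.0                     # a: AIMD flow, b: aggressive flow
for _ in range(200):
    a += 1.0                        # additive increase
    b = min(b + 1.0, capacity)      # aggressive flow, capped by the link itself
    if a + b > capacity:            # congestion signal (e.g. packet loss)
        a = max(a / 2.0, 1.0)       # AIMD flow halves; b ignores the signal
print(f"AIMD flow: {a:.0f}, aggressive flow: {b:.0f}")
```

The AIMD flow's share collapses to its minimum while the aggressive flow takes the whole link, which is why enforcement cannot live in the end-hosts alone: it requires in-network mechanisms such as fair queueing or policing at routers.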
1. The design goals of the Internet seem amazingly well-thought out!
- The idea of “fate-sharing” (keeping connection state at the end-hosts rather than in the network) provides scalability
- QoS (which became a rage only in the late ’90s) seems to have been given due consideration
- Distributed management shows itself in the form of BGP