- Congestive collapse
Congestive collapse (or congestion collapse) is a condition that a packet-switched computer network can reach when little or no useful communication is happening due to congestion. When a network is in such a condition, it has settled (under overload) into a stable state where traffic demand is high but little useful throughput is available, and there are high levels of packet delay and loss (caused by routers discarding packets because their output queues are too full).

History
Congestion collapse was identified as a possible problem as far back as 1984 (RFC 896, dated 6 January 1984). It was first observed on the early Internet in October 1986, when the NSFnet phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, and it continued to occur until end nodes started implementing Van Jacobson's congestion control between 1987 and 1988.

Cause
When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the end points of the network to retransmit the information. However, early TCP implementations had very poor retransmission behavior. When this packet loss occurred, the end points sent extra packets that repeated the lost information, doubling the data rate sent: exactly the opposite of what should be done during congestion. This pushed the entire network into a "congestion collapse" in which most packets were lost and the resultant throughput was negligible.
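This feedback loop can be sketched with a toy model (illustrative only, not any real TCP implementation): senders offer a fixed amount of new data per tick and naively retransmit every dropped packet on the next tick, so the offered load grows without bound while the link delivers no more than its capacity.

```python
CAPACITY = 100   # packets the link can forward per tick (assumed figure)

def tick(offered):
    """Forward up to CAPACITY packets; everything else is dropped."""
    delivered = min(offered, CAPACITY)
    dropped = offered - delivered
    return delivered, dropped

demand = 120     # new packets per tick, deliberately above capacity
backlog = 0      # dropped packets queued for naive retransmission
for _ in range(10):
    offered = demand + backlog         # retransmissions pile on top of new data
    delivered, backlog = tick(offered)

# Offered load keeps climbing while useful delivery stays pinned at capacity.
print(offered, delivered)  # prints "300 100"
```

The retransmission backlog grows every tick, so the network stays saturated indefinitely even though the senders' underlying demand never changed.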
Congestion collapse generally occurs at choke points in the network, where the total incoming bandwidth to a node exceeds its outgoing bandwidth. Connection points between a local area network and a wide area network are the most likely choke points. A DSL modem is the most common small-network example, with between 10 and 1000 Mbit/s of incoming bandwidth and at most 8 Mbit/s of outgoing bandwidth.

Avoidance
The prevention of congestion collapse requires two major components:
# A mechanism in routers to reorder or drop packets under overload,
# End-to-end flow control mechanisms designed into the end points, which respond to congestion and behave appropriately.

The correct end-point behaviour is usually still to repeat dropped information, but to progressively slow the rate at which it is repeated. Provided all end points do this, the congestion lifts, the network is put to good use, and the end points all get a fair share of the available bandwidth. Other strategies, such as slow start, ensure that new connections don't overwhelm the router before congestion detection can kick in.
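A minimal sketch of that end-point policy, combining slow start with additive-increase/multiplicative-decrease (the scheme TCP adopted). The window sizes and threshold below are illustrative, not taken from any particular implementation:

```python
def next_window(cwnd, ssthresh, loss):
    """Advance a simplified congestion window by one round trip."""
    if loss:
        half = max(1.0, cwnd / 2)
        return half, half           # back off multiplicatively on loss
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh   # slow start: exponential growth
    return cwnd + 1, ssthresh       # congestion avoidance: additive increase

cwnd, ssthresh = 1.0, 8.0
for loss in [False, False, False, False, True, False]:
    cwnd, ssthresh = next_window(cwnd, ssthresh, loss)
print(cwnd, ssthresh)  # prints "5.5 4.5"
```

The window ramps up quickly from a cold start, probes linearly once near the last known safe rate, and halves as soon as a loss signals congestion, which is exactly the "progressively slow the repeat rate" behaviour described above.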
The most common router mechanisms used to prevent congestive collapse are fair queueing in its various forms, and random early detection (RED), in which packets are randomly dropped before congestion collapse actually occurs, triggering the end points to slow transmission more gradually. Fair queueing is most useful in routers at choke points with a small number of connections passing through them. Larger routers must rely on RED.

Some end-to-end protocols are better behaved under congested conditions than others. TCP is perhaps the best behaved. The first TCP implementations to handle congestion well were developed in 1984, but it was not until Van Jacobson's inclusion of an open-source solution in Berkeley UNIX ("BSD") in 1988 that good TCP implementations became widespread.
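The RED ramp described above can be sketched as a drop-probability function. The thresholds and maximum probability here are illustrative, and the exponentially weighted moving-average queue computation of real RED is omitted:

```python
def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """RED-style drop probability: never drop below min_th, always drop
    at or above max_th, and ramp linearly in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(3))    # 0.0: queue short, never drop
print(red_drop_probability(10))   # 0.05: halfway up the ramp
print(red_drop_probability(20))   # 1.0: queue past max_th, always drop
```

Because drops begin probabilistically while the queue is still moderate, individual senders are signalled to slow down one at a time rather than all at once when the queue overflows.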
UDP does not, in itself, have any congestion control mechanism. Protocols built atop UDP must handle congestion in their own way. Protocols atop UDP that transmit at a fixed rate, independent of congestion, can be troublesome. Real-time streaming protocols, including many Voice over IP protocols, have this property. Thus, special measures, such as quality-of-service routing, must be taken to keep packets from being dropped from such streams.

In general, congestion in pure datagram networks must be kept at the periphery of the network, where the mechanisms described above can handle it. Congestion in the Internet backbone is very difficult to deal with. Fortunately, cheap fiber-optic lines have reduced costs in the Internet backbone, so the backbone can be provisioned with enough bandwidth to (usually) keep congestion at the periphery.

Side effects of congestive collapse avoidance
WiFi
The protocols that avoid congestive collapse are based on the idea that essentially all data loss on the Internet is caused by congestion. This is true in nearly all cases; errors during transmission are rare on today's fiber-based Internet. However, this assumption causes WiFi networks to have poor throughput in some cases, since wireless links are susceptible to data loss from interference. TCP connections running over WiFi see that data loss, mistakenly conclude that congestion is occurring, and erroneously reduce the data rate sent.

Short-lived connections
The slow start protocol performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, opening and closing a connection for each file requested. This kept most connections in slow start mode, which resulted in poor response time.

To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server.
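The benefit of reuse can be illustrated with a toy round-trip cost model (the RTT counts below are invented for illustration): a fresh connection pays a handshake plus several slow-start round trips before reaching full speed, while a reused connection pays that cost only once.

```python
def total_rtts(files, reuse, handshake=1, slowstart=3, transfer=1):
    """Round trips needed to fetch `files` files (illustrative cost model)."""
    if reuse:
        return handshake + slowstart + files * transfer
    return files * (handshake + slowstart + transfer)

print(total_rtts(10, reuse=False))  # 50: every fetch restarts slow start
print(total_rtts(10, reuse=True))   # 14: slow start is paid once
```

Under this model, reusing the connection amortizes the slow-start ramp across all ten files, which is why persistent connections improve response time so markedly.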
References
* [http://tools.ietf.org/html/rfc2914 RFC 2914] - "Congestion Control Principles", Sally Floyd, September 2000
* [http://tools.ietf.org/html/rfc896 RFC 896] - "Congestion Control in IP/TCP", John Nagle, 6 January 1984
* Introduction to "[http://ee.lbl.gov/papers/congavoid.pdf Congestion Avoidance and Control]", Van Jacobson and Michael J. Karels, November 1988

See also
* Network congestion
* Network congestion avoidance
* Sorcerer's Apprentice Syndrome
* Cascading failure
* Cascade failure (Internet)