In data networking and queueing theory, network congestion is a condition in which a system such as a data network has settled under load into a state of high traffic demand but little useful throughput, accompanied by high levels of packet loss, delay, and delay variation.

Network protocols that use aggressive retries to compensate for packet loss tend to hold systems in a state of network congestion even after the initial load has been reduced to a level that would not normally have induced congestion collapse. Networks using such protocols can therefore exhibit two stable states under the same level of load; the stable state with low throughput is known as congestion collapse.
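To make the retry effect concrete, the sketch below uses an illustrative toy model (an assumption for this example, not a description of any particular protocol): if every lost packet is immediately retransmitted, each successfully delivered packet costs 1/(1 - loss rate) transmissions, so at high loss rates the retries themselves dominate the offered load and keep the network congested.

```python
# Toy model: how aggressive, immediate retries inflate offered load under loss.
# The immediate-retransmission assumption is illustrative only.

def retry_amplification(loss_rate: float) -> float:
    """Average number of transmissions needed per successfully delivered
    packet when every loss triggers an immediate retransmission."""
    return 1.0 / (1.0 - loss_rate)

for loss in (0.1, 0.5, 0.9):
    print(f"loss={loss:.0%}: each useful packet costs "
          f"{retry_amplification(loss):.1f} transmissions")
```

At 90% loss, for example, each useful packet costs ten transmissions, so the retransmissions alone can sustain the overload even after the original demand subsides.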

Experience of congestion collapse on the Internet in the mid-1980s led to the development of congestion control mechanisms for TCP, which began to be deployed in 1987.

Modern networks use congestion control techniques to try to avoid congestion collapse. These include exponential backoff in protocols such as TCP and Ethernet, and fair queueing in devices such as routers.
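As a rough illustration of the backoff idea, the sketch below implements binary exponential backoff with random jitter: each failed attempt doubles the upper bound on the waiting time, up to a cap, which spreads out retries from competing senders. The base delay, cap, and jitter policy are assumptions chosen for the example and are not the exact constants used by TCP or Ethernet.

```python
import random

def backoff_delay(attempt: int,
                  base: float = 0.2,    # initial delay bound in seconds (assumed)
                  cap: float = 60.0     # maximum delay bound in seconds (assumed)
                  ) -> float:
    """Binary exponential backoff with full jitter: each failed attempt
    doubles the upper bound on the wait (up to a cap), and a random delay
    in [0, bound] is drawn to desynchronize competing senders."""
    bound = min(cap, base * (2 ** attempt))
    return random.uniform(0, bound)

# Example: delays a sender might wait after successive failed attempts.
for attempt in range(6):
    print(f"attempt {attempt}: wait {backoff_delay(attempt):.2f}s")
```

The doubling ensures that repeated failures rapidly reduce the retry rate instead of amplifying the load, while the jitter prevents many senders from retrying in lockstep.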

RFC 2914 addresses the subject of congestion control in detail.
