Legacy transport protocols, such as the Transmission Control Protocol (TCP), XNS, XTP, DDCMP, ISO TP4 and many others, together with protocols at layers other than the transport layer, such as the MAC layer for IEEE 802.3 (known as Ethernet), were originally developed when data traffic volumes, data communication speeds and the number of data connections that could be established with a data network communication device were orders of magnitude less than in current data communication networks. Legacy transport and other protocols, however, continue to be widely used in contemporary data networks, and have been extensively modified over time to make them more suitable for the parameters and capabilities of present data network configurations. Even with such modifications, however, legacy transport protocols still exhibit performance drawbacks.
One such drawback of legacy transport protocols is their approach to data traffic congestion. Congestion occurs when the arrival rate of data into shared data buffer resources exceeds the drain rate, or egress rate, of data from the shared resources, which in turn leads to the data buffer resources filling to capacity and becoming unable to store additional data. In legacy transport protocols, congestion is the presumed cause when data packet loss (i.e., packets being discarded in the network) is detected. Such data packet loss often occurs due to oversubscription of shared data buffering resources in a network device, such as a data router and/or packet switch. In an oversubscription scenario with high or bursty data arrival rates and limited-size shared buffer resources, it is likely that newly arrived data packets will be discarded because the buffer resources will often be completely filled with data packets that have not yet been drained from the buffer resources.
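The oversubscription mechanism described above can be illustrated with a minimal tail-drop buffer simulation. The function and its parameter values below are hypothetical and chosen only for illustration; real devices use more sophisticated queue management.

```python
import random

def simulate_buffer(arrival_rate, drain_rate, capacity, steps, seed=1):
    """Tail-drop shared buffer: packets arriving when the buffer is
    full are discarded (illustrative model, not a real device)."""
    random.seed(seed)
    occupancy = 0
    accepted = dropped = 0
    for _ in range(steps):
        # Bursty arrivals approximated by a uniform random packet count
        # whose mean equals arrival_rate.
        arrivals = random.randint(0, 2 * arrival_rate)
        for _ in range(arrivals):
            if occupancy < capacity:
                occupancy += 1   # packet stored in the shared buffer
                accepted += 1
            else:
                dropped += 1     # buffer full: packet is lost
        occupancy = max(0, occupancy - drain_rate)  # egress drains the buffer

    return accepted, dropped

# When the mean arrival rate exceeds the drain rate, the buffer fills
# and newly arrived packets begin to be discarded.
acc, drop = simulate_buffer(arrival_rate=10, drain_rate=6, capacity=50, steps=200)
print(f"accepted={acc} dropped={drop}")
```

With the drain rate at or above the peak arrival rate, the same model produces no drops, matching the description of congestion as an arrival-versus-drain imbalance.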
When such data packet loss is detected in a data network implementing such a legacy transport protocol, a “client” data network device experiencing the congestion (by observing packet loss while sending to a “host” data network device) is configured to reduce its data transmission rates. If the congestion is resolved by this data transmission rate reduction, the data transmission rates for all such client devices are gradually increased until they return to normal levels. If the congestion persists, the data transmission rates are reduced further until it is resolved.
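The back-off-and-probe behavior described above is commonly realized as additive-increase/multiplicative-decrease (AIMD) rate control. A minimal sketch follows; the function name, units, and parameter values are hypothetical and only illustrate the shape of the behavior.

```python
def aimd_rate(rate, loss_detected, floor=1.0, ceiling=100.0,
              increase=1.0, decrease_factor=0.5):
    """One control-interval update of a sender's transmission rate
    (arbitrary units). On detected packet loss the rate is cut
    multiplicatively (congestion presumed); otherwise it is raised
    additively back toward normal levels."""
    if loss_detected:
        return max(floor, rate * decrease_factor)
    return min(ceiling, rate + increase)

rate = 64.0
rate = aimd_rate(rate, loss_detected=True)       # loss observed: halve to 32.0
for _ in range(8):
    rate = aimd_rate(rate, loss_detected=False)  # gradual recovery
print(rate)  # → 40.0
```

Note that each update is driven by loss already observed at the endpoint, which is the root of the detection-lag inefficiency discussed next.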
Such approaches may be inefficient for a number of reasons. A primary reason is inadequate detection and reaction times, wherein a burst congestion event may start and finish before packet loss is noticed at the end-points (client or host devices). As a consequence, any rate reductions by sending devices are delayed in time relative to the onset of congestion and may be inappropriate if the congestion no longer exists. As another example of inefficiency, packets that are lost (i.e., dropped) due to data congestion are then retransmitted by the originating client device due to lack of acknowledgment from the host in response to the earlier transmission. This retransmission may create additional bursty data traffic in the data network, which may further contribute to data congestion. As another example, legacy transport protocols allocate a relatively large transmission window size to every client that has an open data communication channel with the host. Such an approach may result in frequent oversubscription of data buffering resources in the host as a result of transient traffic bursts, especially in network configurations where a large number of client devices (e.g., ten-thousand or more) have open data communication channels established with the host, in part, because the likelihood of simultaneous data traffic bursts by multiple client devices increases with the number of client devices connected to the host.
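The last point can be made concrete with a small Monte Carlo estimate: if every client holds a fixed transmission window and bursts independently, the chance that concurrent bursts exceed the host's shared buffer grows with the number of connected clients. All parameter values below (window size, buffer capacity, per-client burst probability) are hypothetical and chosen only for illustration.

```python
import random

def p_burst_overflow(num_clients, window, buffer_capacity,
                     burst_prob=0.01, trials=1000, seed=7):
    """Estimate the probability that simultaneous bursts, each filling a
    client's full transmission window, oversubscribe the host's shared
    buffer. Illustrative model with independent, identical clients."""
    random.seed(seed)
    overflows = 0
    for _ in range(trials):
        # Number of clients bursting in the same interval.
        bursting = sum(1 for _ in range(num_clients)
                       if random.random() < burst_prob)
        if bursting * window > buffer_capacity:
            overflows += 1  # concurrent windows exceed buffer capacity
    return overflows / trials

# With a fixed per-client window, more connected clients means a higher
# chance that concurrent bursts oversubscribe the shared buffer.
for n in (100, 1000, 10000):
    print(n, p_burst_overflow(n, window=64, buffer_capacity=1280))
```

Because every open channel is granted a full window regardless of how many channels exist, the aggregate committed capacity scales with the client count while the shared buffer does not, which is the oversubscription risk the passage describes.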