The Transmission Control Protocol/Internet Protocol (“TCP/IP”) has been widely deployed for file and data transfers. While TCP/IP is a robust protocol for many types of data transfers, this protocol exhibits certain characteristics that result in major throughput degradation when transferring data over long distances. The TCP/IP protocol was originally devised for local area networks (“LANs”) having negligible round trip time (“RTT”) and packet loss. In these types of network scenarios, the network throughput attained by TCP/IP closely tracks the theoretical maximum throughput of the network transport layer.
When used over wide area networks (“WANs”) covering long distances and having transports with varying transfer characteristics and delays, however, the rigid error recovery mechanisms of TCP adversely throttle the overall throughput. For long-haul networks with non-negligible RTT and packet loss, the overall throughput no longer depends primarily on the transfer rates supported by the transport, but is instead a function of the RTT and the loss rate.
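The RTT dependence described above can be illustrated with a back-of-the-envelope calculation. The sketch below assumes a 64 KiB receive window (the classic TCP limit absent window scaling) and uses the well-known Mathis approximation for loss-limited throughput; both the window figure and the formula are illustrative assumptions, not drawn from the text above.

```python
# Sketch: why TCP throughput over a long-haul link becomes RTT-bound.
# Assumes a 64 KiB receive window (the unscaled TCP maximum) and a
# typical 1460-byte maximum segment size -- illustrative values only.

WINDOW_BYTES = 64 * 1024   # unscaled TCP receive window
MSS_BYTES = 1460           # typical Ethernet maximum segment size

def window_limited_throughput(rtt_seconds):
    """Maximum throughput in bits/s when capped by window size / RTT."""
    return WINDOW_BYTES * 8 / rtt_seconds

def mathis_throughput(rtt_seconds, loss_rate):
    """Mathis et al. approximation: rate ~ MSS / (RTT * sqrt(p))."""
    return (MSS_BYTES * 8) / (rtt_seconds * loss_rate ** 0.5)

# LAN with ~1 ms RTT: ceiling of roughly 524 Mbit/s.
lan = window_limited_throughput(0.001)
# Geostationary satellite with ~600 ms RTT: ceiling under 1 Mbit/s,
# regardless of how fast the underlying transport is.
sat = window_limited_throughput(0.600)
print(f"LAN ceiling: {lan / 1e6:.0f} Mbit/s")
print(f"GEO satellite ceiling: {sat / 1e6:.2f} Mbit/s")
```

The calculation shows the effect the passage describes: the same protocol parameters that saturate a LAN leave a high-capacity intercontinental or satellite link almost entirely idle, because the sender must wait a full RTT for acknowledgments before advancing the window.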
Because TCP/IP utilizes various mechanisms for congestion control, such as a sliding window, congestion avoidance, and slow-start, this protocol can become very bursty when the protocol encounters network delays and packet losses. In particular, the longer the latency, the more time TCP spends negotiating network congestion rather than transferring data. This causes data transfers using the TCP/IP protocol over long distances, such as over inter-continental fiber optic links or geostationary satellites, to take considerably longer than necessary.
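The congestion-control behavior described above can be sketched with a toy model of the congestion window. The sketch assumes TCP Reno-style behavior (window halved on loss, then rebuilt by one segment per RTT in congestion avoidance); the window size and RTT values are illustrative assumptions, not drawn from the text.

```python
# Toy TCP Reno congestion-window model: after a loss, the window is
# halved and congestion avoidance rebuilds it by one segment per RTT.
# Illustrates that recovery time after a single loss scales with RTT.

def rtts_to_recover(cwnd_before_loss):
    """Round trips for congestion avoidance to rebuild a halved window."""
    halved = cwnd_before_loss // 2
    return cwnd_before_loss - halved   # grows +1 segment per RTT

def recovery_seconds(cwnd_before_loss, rtt_seconds):
    """Wall-clock time to regain the pre-loss window."""
    return rtts_to_recover(cwnd_before_loss) * rtt_seconds

# A single loss at a 100-segment window costs 50 RTTs to undo:
# on a 1 ms LAN that is ~0.05 s; on a 200 ms intercontinental
# link the identical recovery takes ~10 s of reduced throughput.
print(f"LAN recovery: {recovery_seconds(100, 0.001):.2f} s")
print(f"Long-haul recovery: {recovery_seconds(100, 0.200):.2f} s")
```

Because every such sawtooth cycle is measured in round trips rather than in seconds, the same loss event that is invisible on a LAN produces long stretches of underutilized bandwidth on a high-latency path, which is the burstiness the passage describes.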
It is with respect to these considerations and others that the disclosure made herein is presented.