Currently, the Internet provides “best-effort” service that offers no preferential treatment per application. Two major proposals to upgrade the Internet with service differentiation have been discussed at the Internet Engineering Task Force (IETF), namely, integrated services and differentiated services. There have been proposals for both premium (“better than best effort”) service and “lower than best effort” service. None of these network-centric solutions has achieved wide-scale usage. In practice, service differentiation between “normal priority” and “low priority” traffic is emulated at the end-points by protocols such as Microsoft's BITS, which is widely used for downloading software updates with the aim of being non-intrusive to the user's experience. The common goal of these transport control protocols is to emulate a reference system of two priority classes, high and low, implemented at network nodes by strict-priority schedulers that would, essentially, serve low-priority traffic only in the absence of high-priority traffic. In fact, many file-transfer applications perform transfers of large files that are unattended by humans and last for tens of minutes, hours, or even days. The designers of such file-transfer applications do want their transfers to achieve good throughput, and may have no incentive to use transport control protocols that emulate lower-than-best-effort service, since by their very design such protocols may starve for periods of time in the presence of any other activity along the network path.
A common consequence is a preference to use standard TCP for bulk data transfers. For a file transfer using a single TCP connection, the bandwidth-sharing objective is that of TCP fairness. For n TCP connections sharing a single bottleneck and having a common mean round-trip time, TCP fairness mandates allocating a fraction 1/n of the bottleneck link capacity to each connection, presuming this is the only bottleneck for these connections. The problem is that it is now the norm rather than the exception for end users to run several concurrent file transfers (e.g., peer-to-peer file-sharing applications or, in general, parallel FTP transfers of large data volumes), throttling any other connection down to a minuscule TCP-fair share of the bottleneck. For concreteness, consider a home user who has several computers interconnected by a high-speed LAN and connected to the Internet by a broadband connection. Suppose the user runs a peer-to-peer file-sharing application that results in both upload and download file transfers, which may typically be long lasting. The home user would like her other, sporadically run interactive or on-line streaming applications not to be affected by the presence of long-running bulk data transfers. The user's aim would be differentiation of bulk data transfers such that they achieve appreciable throughput while not hurting other traffic.
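The TCP-fairness arithmetic described above can be sketched as follows (a minimal illustration only; the function name and the example figures are ours and are not drawn from any standard or implementation):

```python
def tcp_fair_share(link_capacity_mbps: float, num_connections: int) -> float:
    """Per-connection TCP-fair share on a single shared bottleneck:
    each of n connections with a common mean round-trip time receives
    roughly 1/n of the bottleneck link capacity."""
    return link_capacity_mbps / num_connections

# Hypothetical scenario: one interactive connection competing with 9 bulk
# transfers on a 10 Mb/s broadband link is throttled to ~1 Mb/s.
share = tcp_fair_share(10.0, 10)
print(share)  # → 1.0
```

This illustrates why a single interactive flow is squeezed to a minuscule share once many long-lived bulk connections share the same bottleneck.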
Thus, needed are processes and a system that address the shortcomings of the prior art.