As is well known to a person skilled in the art, in the field of telecommunications networks a proxy, in its widest meaning, is a computer system whose function is to act on behalf of a user, i.e. as an intermediary between end-points, e.g. clients and servers, in a network.
A series of links connecting two hosts is called a path. Communication between two processes at each end of a path is referred to as end-to-end communication. Such a process is generally called a network end-point. End-to-end communication is provided by network (e.g. IP, Internet Protocol), transport and (optionally) application layer protocols, which are so-called end-to-end protocols. The term “layer” is for example known from the 7-layer OSI model but is not restricted to this model here and can denote a layer or sublayer in any network with a layered protocol stack.
The term “flow” is used herein to refer to an end-to-end stream of packets identified by a source IP address, a destination IP address, source and destination port numbers and a protocol identifier.
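The five-parameter flow identification described above can be sketched as a simple lookup key; the following is an illustrative sketch only, and the field names are assumptions, not taken from the source.

```python
from collections import namedtuple

# Illustrative 5-tuple key identifying a flow, as defined above:
# source IP, destination IP, source port, destination port, protocol.
FlowKey = namedtuple(
    "FlowKey",
    ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"],
)

# Two packets belong to the same flow iff their five-tuples are equal.
key_a = FlowKey("10.0.0.1", "192.0.2.7", 40000, 80, "TCP")
key_b = FlowKey("10.0.0.1", "192.0.2.7", 40000, 80, "TCP")
assert key_a == key_b
```

Such a key can serve, for example, as the index under which a proxy stores per-flow state.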
An end-to-end protocol runs between two hosts if it is of the unicast type, or between a number of hosts if it is of the multicast or broadcast type, at the transport, session, or application layer. The end-to-end path of an end-to-end protocol, i.e. the series of links connecting the various hosts, often runs across multiple (sub-)networks. Some networks may exhibit characteristics such as high delays and/or high packet loss rates that adversely affect the performance that an end-to-end protocol may otherwise provide. Such networks are hereinafter referred to as “problematic” networks.
A commonly used solution to this problem is the implementation of a proxy running either on one side, or on both sides of the problematic network, so as to modify the behaviour of the end-to-end protocol in such a way that the above-mentioned adverse effects are mitigated or even eliminated.
A so-called “network-adaptive” proxy is any form of a transport, session, or application layer proxy for an end-to-end protocol (whether it is of the uni-, multi-, or broadcast type) that utilises explicit information from a problematic network to control the proxy in such a way as to improve end-to-end performance.
A typical example of a transport layer proxy is a TCP proxy that splits an end-to-end connection into two independent TCP connections. FIG. 1 shows a connection between a client 12 and a server 13 with two TCP proxies 101, 111, one on each side of the network. The end-to-end connection is therefore split into three independent TCP connections:
a first TCP connection through a network 14 in accordance with the Bluetooth or IrDA (Infrared Data Association) standard, between the client 12 and a first proxy 111, located e.g. in the user's mobile phone,
a second TCP connection through a radio access network 15, between the first proxy 111 and a second proxy 101, located e.g. in a Gateway GPRS Support Node (GGSN) between the radio access network 15 and the Internet 16, and
a third TCP connection through the Internet 16, between the second proxy 101 and the server 13.
The proxies influence TCP's end-to-end congestion control behaviour in such a way that it is less impaired by large delays.
The proxies shown in FIG. 1 are so-called “network-blind” proxies, i.e. they operate on the transport protocol while treating all entities below the transport layer, such as lower layer protocols, network nodes, etc., as a “black box”, without taking account of any of the parameters of these entities. The proxy state 171 is therefore independent of these parameters.
FIG. 2 (in which the same reference signs as in FIG. 1 are used for similar entities), on the other hand, illustrates network-adaptive proxies 102, 112: for the same end-to-end connection between a client 12 and a server 13, and with the proxies 102, 112 in the same locations as in FIG. 1, the proxies utilise explicit information about the problematic network from below the transport layer. As shown in FIG. 2, there is signalling between the problematic network and the proxies; the proxies receive, for example, local information 18 from the radio access network 15, which influences the proxy state 172.
Since access to more information about a network's state usually allows a higher degree of optimisation, network-adaptive proxies are in general more effective in improving end-to-end performance.
Instances of what is defined here as a network-adaptive proxy are described in a White Paper by the company Packeteer, Inc., entitled “Controlling TCP/IP Bandwidth”, updated November 1998, available at the Web address http://www.packeteer.com, and in a paper by H. Balakrishnan et al. entitled “Explicit Loss Notification and Wireless Web Performance”, in Proc. IEEE Globecom, November 1998.
In these documents, specific items of local information known to the proxy are mentioned and associated with specific actions performed on specific items of the transport, session, or application layer state that a proxy maintains. However, other items of local information can be useful to a proxy, and other actions associated with those items of information are conceivable.
A proxy holds and maintains transport, session, or application layer state for each end-to-end connection that is proxied. Such state information may be described as a list of parameters, including e.g.:
- measured, approximated or explicitly provided round trip times,
- inter-ACK arrival times,
- flow control windows, such as the congestion or the advertised window,
- retransmission timers,
- a list of TCP connections that are currently established or in the process of being established where the TCP client may potentially be served by the proxy (i.e. the connection may be proxied).
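The per-connection state S listed above could be represented, purely by way of illustration, as a record keyed by a flow identifier; all field names and default values below are assumptions, not part of the source.

```python
from dataclasses import dataclass

# Illustrative container for the transport layer state S that a proxy
# maintains per proxied end-to-end connection; names are assumptions.
@dataclass
class ProxyState:
    rtt_s: float = 0.0        # measured, approximated or provided round trip time
    inter_ack_s: float = 0.0  # inter-ACK arrival time
    cwnd: int = 1             # congestion window, in segments
    adv_wnd: int = 65535      # advertised window, in bytes
    rto_s: float = 3.0        # retransmission timer value

# One state instance per proxied connection, keyed e.g. by a flow 5-tuple.
states: dict = {}
states[("10.0.0.1", "192.0.2.7", 40000, 80, "TCP")] = ProxyState(rtt_s=0.2)
```

The dictionary keyed by flow identifiers mirrors the per-connection bookkeeping that the text describes.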
It is impossible to list all relevant parameters that might be useful to be maintained in a proxy. For example, future protocols might use parameters that are unknown or unused today. Let S be the transport, session, or application layer state that is maintained by the proxy. A network-adaptive proxy uses S to influence the performance of the end-to-end connection.
Likewise, let N be the current state of the problematic network as experienced by a specific end-to-end connection. N may include the following parameters:
- the measured or assigned bit rate that is available to a particular connection within the problematic network,
- the delay that a particular connection experiences within the problematic network,
- the flow's pipe capacity, which is the minimum number of packets (i.e. the minimum load) a flow needs to have in flight to fully utilise its available bandwidth, and above which packets are queued in the network (the flow's pipe capacity can be approximated from the bit rate and the delay),
- the geographical location of the host that terminates a particular connection, such as the location of a mobile phone in a single cell, group of cells, or location area in a cellular network,
- the network load experienced in that part of the problematic network in which the host that terminates a particular connection is located.
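The approximation of the pipe capacity from the bit rate and the delay, mentioned in the list above, amounts to a bandwidth-delay product divided by the segment size. The sketch below uses assumed example values (a 384 kbit/s bearer, 200 ms round trip, 1460-byte segments) purely for illustration.

```python
def pipe_capacity(bit_rate_bps: float, rtt_s: float, segment_bytes: int) -> int:
    """Approximate the flow's pipe capacity in packets: the minimum
    number of packets that must be in flight to fully utilise the
    available bandwidth (bandwidth-delay product over segment size)."""
    bdp_bits = bit_rate_bps * rtt_s
    return max(1, round(bdp_bits / (segment_bytes * 8)))

# Example: 384 kbit/s bearer, 200 ms round trip, 1460-byte segments.
print(pipe_capacity(384_000, 0.2, 1460))  # → 7
```

With these example values the flow needs roughly seven segments in flight to fill the pipe; additional packets beyond that would simply be queued in the network.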
The values of at least some of the above parameters may vary over time.
While a network-blind proxy has nothing but the state S defined above to influence the performance of the end-to-end connection, a network-adaptive proxy is capable of transforming the state S into a new state S*, using the current state N of the problematic network and a set of functions F, with the result that S* commonly improves end-to-end performance more than S does.
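The transformation of S into S* using N and a set of functions F can be sketched as follows. The particular function shown (clamping the congestion window to the flow's pipe capacity) is only an illustrative example of what a member of F might do, not a mechanism disclosed in the source.

```python
# Illustrative: F is a set of functions, each mapping (S, N) to a new state.
def clamp_cwnd(state: dict, network: dict) -> dict:
    """Example adaptation: keep the congestion window at or below the
    flow's pipe capacity, so the flow fills its available bandwidth
    without queuing excess packets in the problematic network."""
    new_state = dict(state)
    new_state["cwnd"] = min(state["cwnd"], network["pipe_capacity"])
    return new_state

F = [clamp_cwnd]

S = {"cwnd": 20}              # transport layer state maintained by the proxy
N = {"pipe_capacity": 7}      # current state of the problematic network
S_star = S
for f in F:
    S_star = f(S_star, N)     # S* = F(S, N)
print(S_star)  # → {'cwnd': 7}
```

A network-blind proxy, by contrast, would have only S available and could not perform this transformation.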
However, the shortcoming of the prior art network-adaptive proxies mentioned above is that they operate without taking account of the ratio between the effort of proxying and the potential performance improvement resulting from the proxying.
In fact, proxying an end-to-end connection is not always justified: in some cases the benefit, i.e. the potential performance improvement, is only marginal and thus does not justify the effort of proxying.
For example, there is usually little margin for improving the throughput of a TCP connection that only has a small pipe capacity (for instance, 2-4 transport layer segments) and experiences no or only a low rate of non-congestion related packet losses (e.g. packet losses caused by transmission errors). By contrast, a TCP connection that has a higher pipe capacity and/or experiences a higher rate of non-congestion related packet losses offers more margin for improving its throughput.
The pipe capacity is related to the flow's round trip time. A flow's round trip time (RTT) is the time it takes to send a packet from one network end-point to the other, get it processed at the receiving end-point, and send another packet back to the end-point that sent the initial packet. A flow's RTT varies dynamically, depending on such factors as packet size (transmission delays), queuing delays experienced by the packets in the network, and processing required at the receiving end-point. The packets a network end-point sends within the flow's RTT are called a flight of packets, or simply a flight. Those packets are also referred to as the packets a network end-point has “in flight”. The number of packets a network end-point has in flight is called the flow's load.
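The notions of flight and load defined above can be illustrated with a minimal sketch; the bookkeeping shown (a set of unacknowledged sequence numbers) is an assumption chosen for clarity, not a mechanism from the source.

```python
# Illustrative: the flow's load is the number of packets currently in
# flight, i.e. sent within the RTT but not yet acknowledged.
in_flight: set = set()

def send(seq: int) -> None:
    in_flight.add(seq)        # packet enters the flight

def acknowledge(seq: int) -> None:
    in_flight.discard(seq)    # packet leaves the flight

for seq in range(1, 6):       # send a flight of five packets
    send(seq)
acknowledge(1)                # two ACKs arrive
acknowledge(2)
print(len(in_flight))  # → 3, the flow's current load
```

If this load stays below the flow's pipe capacity, the available bandwidth is not fully utilised; if it exceeds the pipe capacity, the excess packets are queued in the network.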
Meanwhile, proxying every transport connection that runs across the problematic network might place a high demand on the computing resources (processor, memory, port numbers, etc.) of the platform that runs the proxy. This demand might make it impractical to operate a proxy, for cost or technical reasons or both.
The present invention aims at overcoming the above-mentioned drawbacks.