The capability of estimating available capacity, end-to-end, over a data transfer path of a data communication system comprising a data network is useful in several contexts, including network monitoring and server selection. Passive estimation of the available capacity of a data transfer path, such as estimation of the bandwidth of an end-to-end data transfer path, is possible in principle, provided that all network nodes in the data transfer path can be accessed. However, this is typically not possible, and estimation of the available end-to-end capacity of the data transfer path is typically done by active probing of the data transfer path. The available capacity can be estimated by injecting probe packets into the data transfer path and then analysing the observed effects of cross traffic on the probe packets. This kind of active measurement requires access only to the sender and receiver hosts, typically data network nodes, and does not require access to any intermediate nodes in the data transfer path between the sending and receiving nodes.

Conventional approaches to active probing require the injection of probe packet traffic into the data transfer path of interest at a rate that is transiently sufficient to use all available capacity and cause induced transient congestion of the data transfer path being estimated. If only a small number of probe packets is used, the induced transient congestion can be absorbed by buffer queues in the nodes. Accordingly, no packet loss is caused, but only a small increase in path delay over a few packets. The desired measure of the available capacity is determined based on this inter-packet delay increase. Probe packets can be sent in pairs or in trains, at various probing rates. The probing rate at which the path delay begins to increase corresponds to the point of congestion and is thus indicative of the available capacity.
Probe packets can also be sent such that the temporal separation between probe packets within a given probe packet train varies, so each probe packet train can cover a range of probing rates.
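The rate-scan principle described above can be illustrated with a simple fluid-model sketch. The function names, the linear queueing model, and the example numbers below are illustrative assumptions, loosely following the TOPP/BART style of analysis, not an implementation of either method.

```python
def delay_increase_per_packet(probe_rate, cross_rate, link_capacity):
    """Fluid-model sketch of the delay measure seen by a probe train.

    probe_rate, cross_rate, link_capacity are in bits per second
    (illustrative parameters). Below the point of congestion the
    delay does not grow; above it, the queue builds at the rate of
    over-subscription, so the per-packet delay increase is positive.
    """
    overload = probe_rate + cross_rate - link_capacity
    if overload <= 0:
        return 0.0                    # no induced congestion: delay flat
    return overload / link_capacity   # queue grows -> delay increases


def estimate_available_capacity(cross_rate, link_capacity, rates):
    """Scan increasing probing rates; the first rate at which the
    delay measure turns positive marks the point of congestion and
    thus (approximately) the available capacity."""
    for r in rates:
        if delay_increase_per_packet(r, cross_rate, link_capacity) > 0:
            return r
    return None


# Example: a 100 Mbit/s link carrying 40 Mbit/s of cross traffic
# leaves roughly 60 Mbit/s available; scanning in 10 Mbit/s steps,
# the delay knee first appears at the 70 Mbit/s probing rate.
rates = range(10_000_000, 100_000_001, 10_000_000)
print(estimate_available_capacity(40e6, 100e6, rates))
```

In a real measurement, of course, the delay values come from time stamps taken at the sending and receiving nodes rather than from a model, and the knee is located by filtering noisy samples.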
Methods using active probing are based on a model where probe packets are sent from a sending node to a receiving node in a data communication system. Typically, time stamps of the probe packets at the sending and receiving nodes are then used by an algorithm to produce estimates of the capacity of the data transfer path.
Examples of known methods for estimating the capacity of a data transfer path in use today include the so-called Trains Of Packet Pairs (TOPP) and Bandwidth Available in Real Time (BART) methods. The BART method can be regarded as an improvement of the TOPP method. See the document “Probing-Based Approaches to Bandwidth Measurements and Network Path Emulation” by Bob Melander, PhD Thesis, Uppsala University, 2003, for a description of TOPP, and European Patent 1952579 for a further description of the BART method.
Further, the IETF IP Performance Metrics (IPPM) working group has defined two IP active measurement protocols: the One-Way Active Measurement Protocol (OWAMP), RFC 4656, and the Two-Way Active Measurement Protocol (TWAMP), RFC 5357. OWAMP is designed for measuring one-way packet delay and one-way packet loss between two hosts. TWAMP is based on OWAMP and is designed for measuring one-way and two-way (round-trip) packet delay and packet loss between two hosts.
In many networks, Quality of Service, QoS, mechanisms are included. Recent studies have shown that available capacity measurement methods may produce erroneous estimates in networks where QoS mechanisms are deployed, see e.g. “Available Bandwidth Measurements in QoS Environments” by Mark Bechtel and Paraskevi Thanoglou, Master Thesis, KTH, 2010.
One example of such a QoS mechanism is a traffic shaper, which is often implemented as a token bucket. A token bucket is an algorithm used in packet switched computer networks and telecommunications networks to check that data transmissions conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow). The token bucket algorithm is based on an analogy of a fixed capacity bucket into which tokens, normally representing a unit of bytes or a single packet of predetermined size, are added at a fixed rate. When a packet is to be checked for conformance to the defined limits, the bucket is inspected to see if it contains sufficient tokens at that time. If so, the appropriate number of tokens, e.g. equivalent to the length of the packet in bytes, are removed (“cashed in”), and the packet is passed, e.g., for transmission. If there are insufficient tokens in the bucket, the packet does not conform and the contents of the bucket are not changed. Non-conformant packets can be treated in various ways:
- They may be dropped.
- They may be enqueued for subsequent transmission when sufficient tokens have accumulated in the bucket.
- They may be transmitted, but marked as being non-conformant, possibly to be dropped subsequently if the network is overloaded.
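The token bucket check described above can be sketched as follows. This is a minimal illustrative sketch, assuming one token represents one byte; the class and parameter names are hypothetical, not taken from any particular shaper implementation.

```python
import time

class TokenBucket:
    """Minimal token-bucket conformance check (illustrative sketch).

    rate:  tokens added per second (here, 1 token = 1 byte)
    depth: maximum number of tokens the bucket can hold (burst size)
    """
    def __init__(self, rate, depth):
        self.rate = rate
        self.depth = depth
        self.tokens = depth            # bucket starts full
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # Tokens accumulate at the fixed rate, but never beyond the depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, packet_len):
        """Check a packet of packet_len bytes for conformance.

        If enough tokens are present they are removed ("cashed in") and
        the packet passes; otherwise the bucket is left unchanged and the
        packet is non-conformant (to be dropped, queued, or marked).
        """
        self._refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False
```

For example, a bucket with a rate of 1000 tokens/s and a depth of 1500 tokens passes one full-size 1500-byte packet immediately (the bucket starts full), but a second such packet sent straight away is non-conformant until enough tokens have been replenished.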
A conforming flow can thus contain traffic with an average rate up to the rate at which tokens are added to the bucket, and have a burstiness determined by the depth of the bucket. This burstiness may be expressed in terms of either a jitter tolerance, i.e. how much sooner a packet might conform (e.g. arrive or be transmitted) than would be expected from the limit on the average rate, or a burst tolerance or maximum burst size, i.e. how much more than the average level of traffic might conform in some finite period.
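The burst tolerance bound stated above can be made concrete with a short worked example; the function name and the figures are illustrative.

```python
def max_conforming_traffic(rate, depth, period):
    """Upper bound on the traffic (bytes) a token bucket with the given
    token rate (bytes/s) and bucket depth (bytes) can pass as conformant
    in a window of `period` seconds: the steady average-rate allowance
    plus, at most, one full bucket of accumulated burst."""
    return rate * period + depth

# E.g. a bucket filled at 1 Mbyte/s with a 0.5 Mbyte depth can pass
# at most 2.5 Mbyte of conformant traffic in any 2-second window.
print(max_conforming_traffic(1_000_000, 500_000, 2))
```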
Thus, in short, if a traffic shaper (also called a traffic shaping node), e.g. a token bucket, is provided in a data communication system, it may have a serious effect on available path capacity measurements, resulting in an overestimation of the available path capacity.