PTM transmissions, for example multicast or broadcast transmissions in IP (Internet Protocol) networks, are employed for the distribution of data from a source to a plurality of receivers. In a typical scenario, a multimedia data stream comprising audio data, video data or a combination thereof is distributed to the audience within the framework of a TV service, video-on-demand service, news or podcast service, etc. PTM-enabled data transport networks may be based on the IP protocol suite. The IP technology is widely employed in today's communication networks comprising not only the Internet, but also fixed and mobile telecommunication networks such as UMTS networks.
A possibly large fraction of all the data transported over a communication network may be classified as “Best Effort” traffic, i.e. the data packets are transported with the currently available forwarding capacities in the network nodes (insofar as data transport networks are concerned, the terms ‘node’ and ‘router’ may be used interchangeably herein). Forwarding delays and packet losses occur in case of overload situations in a node. Each packet suffers its individual delay (jitter), such that packets sent in a particular order by the source may arrive out of order at the destination. However, some applications require a minimum Quality of Service (QoS). Consider for example a real-time application such as a TV stream: while a small amount of packet loss will be tolerable, the packet delay (more precisely, the jitter) has to be limited to lie below a pre-defined maximum value.
From the point of view of the network, in order to be able to provide a better-than-best-effort QoS, some form of traffic management (traffic handling) is required. The required transmission resources need to be available in the transport nodes. More particularly, a congestion situation in a node typically occurs when the forwarding capacity of an interface connecting or linking the node to the other nodes along a data path is exceeded. For example, an incoming or outgoing queue of an interface may be filled up to a pre-defined limit (a higher load would lead to an intolerable delay) or may already be filled up to its total capacity (a higher load would lead to packet loss). Traffic handling has to ensure that such overload situations (congestion situations) are avoided, or at least resolved on short timescales (at least for data connections or data streams with QoS better than Best Effort).
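The two queue fill levels mentioned above can be sketched as a simple classification function. This is a minimal illustration only; the state names and thresholds are assumptions for clarity, not part of any standard:

```python
# Sketch of per-interface congestion detection, assuming two fill levels:
# a delay limit (above it, queuing delay becomes intolerable) and the
# total queue capacity (above it, packets are lost). Illustrative only.

def classify_queue(occupancy: int, capacity: int, delay_limit: int) -> str:
    """Return the congestion state of an interface queue.

    occupancy   -- packets currently queued
    capacity    -- total queue size (beyond this, packets are dropped)
    delay_limit -- pre-defined fill level above which delay is intolerable
    """
    if occupancy >= capacity:
        return "severe"      # further load leads to packet loss
    if occupancy >= delay_limit:
        return "congested"   # further load leads to intolerable delay
    return "normal"

print(classify_queue(30, 100, 80))   # normal
print(classify_queue(85, 100, 80))   # congested
print(classify_queue(100, 100, 80))  # severe
```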
Various techniques are known to provide a predetermined QoS for point-to-point (“PTP”) connections such as unicast connections in IP networks. One class of techniques is based on ‘resource reservation’; for example the Resource Reservation Protocol (RSVP) is widely known in this regard. During connection setup, an RSVP signaling message is sent along the intended data path (and back to the source) to trigger resource reservation in each router along the data path. In other words, before the traffic flow is sent, a particular (requested or available) QoS has to be agreed upon between all involved routers. Only if resource reservation is successful may the flow be admitted into the network.
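The hop-by-hop reservation principle can be sketched as follows. This is not the actual RSVP message exchange; the `Router` class, its fields, and the tear-down policy are illustrative assumptions:

```python
# Hedged sketch of RSVP-style resource reservation: a request travels
# along the intended data path, each router tries to reserve the
# requested bandwidth, and the flow is admitted only if every hop
# succeeds. On failure, partial reservations are released again.

class Router:
    def __init__(self, name, capacity):
        self.name = name
        self.free = capacity          # remaining reservable bandwidth
        self.reservations = {}        # per-flow state kept in each router

    def reserve(self, flow_id, bw):
        if bw > self.free:
            return False
        self.free -= bw
        self.reservations[flow_id] = bw
        return True

    def release(self, flow_id):
        self.free += self.reservations.pop(flow_id, 0)

def setup_path(path, flow_id, bw):
    """Admit the flow only if every router on the path can reserve bw."""
    done = []
    for router in path:
        if not router.reserve(flow_id, bw):
            for r in done:            # tear down partial reservations
                r.release(flow_id)
            return False
        done.append(router)
    return True

path = [Router("edge", 10), Router("core", 5), Router("egress", 10)]
print(setup_path(path, "flow-1", 4))  # True  (fits on every hop)
print(setup_path(path, "flow-2", 4))  # False (core has only 1 left)
```

Note that the per-flow `reservations` dictionary in each router is exactly the per-connection state that, as discussed below, makes this approach costly in large networks.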
As the term ‘resource reservation’ indicates, RSVP requires per-connection (per-flow) reservation states in each router along the data path. Thus, in large networks a huge number of per-flow states needs to be maintained, which requires considerable resources in the routers. Routers within the core part of a transport network (the core nodes) are typically configured for maximum forwarding capacity, i.e. only a limited amount of the processing resources should be spent on traffic management. Therefore, in large networks resource reservation is typically not employed. Instead, some class-based handling may be performed, known for example as “DiffServ” (short for “Differentiated Services”) handling. According to DiffServ, predetermined traffic classes are provided in the routers. The ingress node (the edge router at the edge of the network, which is the entry point into the network from the point of view of a data stream) marks each packet of an incoming data stream, and the core nodes sort and forward the packets according to the marking. In other words, DiffServ provides QoS on an aggregate basis rather than on a per-flow basis.
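The division of labor between the ingress node and the core nodes can be sketched as follows. The class names and the service-to-class mapping are illustrative assumptions, not actual DiffServ code points or operator policy:

```python
# Minimal sketch of DiffServ-style aggregate handling: the ingress node
# marks each packet with a traffic class, and core nodes sort packets
# into per-class queues based on that marking alone -- no per-flow state.

from collections import defaultdict

INGRESS_POLICY = {        # assumed edge configuration (illustrative)
    "iptv": "EF",         # real-time TV -> expedited class
    "web": "BE",          # web browsing -> Best Effort
}

def mark(packet):
    """Ingress node: classify the packet and mark it."""
    packet["class"] = INGRESS_POLICY.get(packet["service"], "BE")
    return packet

def core_enqueue(queues, packet):
    """Core node: look only at the marking, never at the flow."""
    queues[packet["class"]].append(packet)

queues = defaultdict(list)
for pkt in [{"service": "iptv"}, {"service": "web"}, {"service": "ftp"}]:
    core_enqueue(queues, mark(pkt))

print(len(queues["EF"]), len(queues["BE"]))  # 1 2
```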
With the DiffServ approach, in a congestion situation data packets from many different traffic flows will be discarded. As there is no mechanism to control QoS on a per-flow basis, it is not possible in the network to terminate one or more of the flows in order to resolve congestion and to continue providing full QoS to the remaining flows. Within the Internet Engineering Task Force (IETF), a lightweight resource reservation protocol titled “Resource Management in DiffServ” (RMD) has been proposed (see the RMD-QoSM draft “draft-ietf-nsis-rmd-12.txt” on the IETF webpage), which is intended to be applicable to large networks. RMD includes a pre-emption function that is able to terminate a number of packet flows as required in order to resolve a congestion situation, and in this way to maintain QoS for the remaining flows. RMD is specified for unicast flows only.
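The idea of a pre-emption function can be sketched as follows. This is in the spirit of RMD pre-emption but is not the RMD algorithm; in particular, the selection policy (terminating the largest flows first so that as few flows as possible are affected) is an illustrative assumption:

```python
# Hedged sketch of a pre-emption function: when a link is overloaded,
# terminate just enough flows that the remaining ones again fit the
# capacity and keep full QoS. The selection order is an assumption,
# not taken from the RMD specification.

def preempt(flows, capacity):
    """flows: dict flow_id -> bandwidth. Returns (kept, terminated)."""
    load = sum(flows.values())
    terminated = []
    # Illustrative policy: drop the largest flows first, so that as
    # few flows as possible need to be terminated.
    for fid in sorted(flows, key=flows.get, reverse=True):
        if load <= capacity:
            break
        load -= flows[fid]
        terminated.append(fid)
    kept = {f: bw for f, bw in flows.items() if f not in terminated}
    return kept, terminated

kept, gone = preempt({"a": 2, "b": 5, "c": 3}, capacity=6)
print(sorted(kept), gone)  # ['a', 'c'] ['b']
```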
In a PTM transmission such as IP multicast, the content is distributed along a so-called distribution tree. Packets flowing from the sender and reaching a junction point (core node) of the tree are replicated therein and passed on in as many directions as required. Multicast routing protocols are deployed for establishing and possibly refreshing the distribution tree. As one example of such a routing protocol, the Protocol Independent Multicast (PIM) may be mentioned. In the so-called “dense mode” of PIM, the distribution tree is established by first flooding the multicast packets to the entire network. Then, if a router has no subscribers, or receives the same packet via multiple interfaces, the router removes itself from the tree. This behavior is called “Flood and Prune”. Alternatively, in “sparse mode”, PIM relies on pre-established rendezvous points in the network for setting up a distribution tree. A source-specific distribution tree is subsequently established by optimizing the paths from the sender to the receivers. The distribution tree between the sender and the rendezvous points is removed in a last step. In both dense mode and sparse mode, removing the superfluous part of the distribution tree is called ‘pruning’. For this purpose the PIM protocol provides the PIM PRUNE message, a signaling message sent upstream from a router toward the sender, indicating that the part of the distribution tree located at that router may be terminated.
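The “Flood and Prune” behavior can be sketched on a toy distribution tree. The tree representation and the recursive prune walk are assumptions for clarity; real PIM routers act independently on locally received PRUNE messages rather than via a central traversal:

```python
# Illustrative sketch of dense-mode pruning: after flooding, a router
# with no subscribers and no remaining downstream routers removes
# itself from the tree (conceptually, it sends a PRUNE upstream).

def prune(children, subscribers, node):
    """Return True if `node` stays on the distribution tree."""
    # Keep only child subtrees that still lead to a subscriber.
    kept = [c for c in children.get(node, [])
            if prune(children, subscribers, c)]
    children[node] = kept
    # A router with neither subscribers nor children prunes itself.
    return node in subscribers or bool(kept)

# Flooded tree: sender -> a -> {b, c}; only c has a subscriber.
children = {"sender": ["a"], "a": ["b", "c"], "b": [], "c": []}
prune(children, subscribers={"c"}, node="sender")
print(children["a"])  # ['c'] -- router b has pruned itself
```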
While required bandwidths tend to increase for data applications in general, this trend may be particularly relevant for present and future multicast applications such as (mobile) TV services, which can be expected to become increasingly important. As such high-bit-rate flows can cause correspondingly severe congestion situations in the data transport networks, congestion handling for multicast flows is particularly important in PTM-enabled networks.
Network traffic management may provide for an admission control mechanism located at the edge routers of the network acting as ingress nodes. Admission control may ensure that a new flow is blocked at the edge of the network, in this way avoiding congestion situations in the core nodes of the network. However, in case of node or link failure, for example, traffic flows have to be rerouted, and an overload situation may occur along the new route. Even with class-based (DiffServ) traffic management within the network, i.e. even when high-QoS flows such as real-time flows are separated from Best Effort traffic, the link capacities along the new route may not be sufficient. These situations cannot be handled by admission control mechanisms located at the edge of the network.
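Edge-based admission control can be sketched as follows, assuming the ingress node tracks a provisioned capacity and blocks flows that would exceed it. The accounting model is an illustrative assumption; the sketch also makes the limitation visible, since the admitted total becomes wrong as soon as flows are rerouted onto a different path inside the network:

```python
# Sketch of admission control at an ingress node: a new flow is admitted
# only if the already-admitted flows plus the new one still fit the
# provisioned capacity toward the core. Illustrative names and model.

class IngressAdmission:
    def __init__(self, capacity):
        self.capacity = capacity
        self.admitted = {}   # flow_id -> bandwidth

    def request(self, flow_id, bw):
        used = sum(self.admitted.values())
        if used + bw > self.capacity:
            return False     # blocked at the edge; core stays uncongested
        self.admitted[flow_id] = bw
        return True

ac = IngressAdmission(capacity=10)
print(ac.request("tv-1", 6))  # True
print(ac.request("tv-2", 6))  # False -- would exceed the capacity
```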
A known congestion handling technique for multicast flows in data transport networks relies on the receivers of the flow. These have to be adapted to provide some feedback, e.g. using the Real Time Control Protocol (RTCP), to the sender, wherein the feedback indicates a quality of the received stream. For example, the receivers may report back their perceived loss rate. However, as this kind of congestion control depends on the end application, it is not controllable from a network operator's perspective. Moreover, a large number of receivers may flood the network with their feedback messages. This may be prevented by some damping mechanism, but this in turn leads to a congestion control which may act only on long timescales of the order of minutes, while it would be preferable to have congestion resolved on typical network timescales, i.e. on timescales of the order of the round trip time (typically some fraction of a second).
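The scaling problem behind the damping mechanism can be illustrated with a small calculation in the spirit of RTCP interval scaling: if the aggregate feedback rate is capped at a fixed share of the session bandwidth, the interval between reports from any single receiver grows linearly with the group size. The 5% share and packet size are illustrative values, not normative parameters:

```python
# Hedged sketch of feedback damping: cap aggregate feedback bandwidth,
# so the per-receiver reporting interval grows with the audience size.
# Parameter values are illustrative assumptions.

def report_interval(n_receivers, session_bw_bps,
                    rtcp_pkt_bits=800, rtcp_share=0.05):
    """Average seconds between reports per receiver."""
    feedback_bw = session_bw_bps * rtcp_share   # bandwidth for feedback
    return n_receivers * rtcp_pkt_bits / feedback_bw

print(round(report_interval(10, 1_000_000), 3))      # 0.16
print(round(report_interval(10_000, 1_000_000), 1))  # 160.0
```

With ten receivers the interval is a fraction of a second, but with ten thousand receivers it grows to minutes, which illustrates why such feedback-based control acts only on long timescales.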
Another conventional multicast congestion control approach relies on group membership regulation. Examples are techniques such as Receiver-Driven Layered Multicast (RLM) or Receiver-Driven Layered Congestion Control (RLC). Multiple distribution groups have to be provided, and receivers may join and leave a particular group using the IGMP protocol, such that each receiver may control its level of participation in a session composed of several multicast groups. However, such mechanisms are very complex.
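The receiver-side decision logic of such layered schemes can be sketched as follows. The loss thresholds and layer count are illustrative assumptions, not values from RLM or RLC:

```python
# Illustrative sketch of receiver-driven layered multicast: the session
# is split across several multicast groups ("layers"), and each receiver
# joins or leaves layers -- e.g. via IGMP -- based on observed loss.

def adjust_layers(joined, max_layers, loss_rate,
                  drop_above=0.05, add_below=0.01):
    """Return the new number of layers joined by one receiver."""
    if loss_rate > drop_above and joined > 1:
        return joined - 1            # leave the top layer (IGMP leave)
    if loss_rate < add_below and joined < max_layers:
        return joined + 1            # probe a higher layer (IGMP join)
    return joined                    # loss acceptable: no change

print(adjust_layers(3, 4, loss_rate=0.10))  # 2 -- congested, back off
print(adjust_layers(3, 4, loss_rate=0.00))  # 4 -- headroom, add a layer
```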
The RSVP technique may also be used for resource reservation in the multicast case. As it requires maintaining per-flow reservation states in the routers, it can only be applied in networks with a moderate number of flows.