The provision of emergency services is increasingly important in telecommunications networks. Emergency calls or data flows must usually have priority over other calls or data flows in the network. The ITU-T Recommendation Y.1541, “Network Performance Objectives for IP-Based Services”, May 2002, specifies different priority types for IP networks.
In IP networks, resource management protocols on the data path have been investigated in recent years to ensure quality of service (QoS). Such protocols are responsible for ensuring that the resource needs of data flows arriving at the edge of a network domain or autonomous system are met, and that the interior nodes of the domain are provided with information regarding the future path of the flow. This enables the interior nodes to make a local admission control decision. A flow is usually admitted into a network domain only if all interior nodes in the path have admitted it. A flow is admitted end-to-end only if all intermediate domains have made a positive admission decision. The admission of a flow also requires the reservation of resources in all interior nodes (except for pure measurement-based admission control).
Integrated Services (IntServ) is one architecture adopted to ensure QoS for real-time and non real-time traffic in the Internet. The Internet Engineering Task Force (IETF) standardization organization has specified the Resource ReSerVation Protocol (RSVP) for reserving resources in IP routers, as specified in RFC 2205. Each router along the data path stores “per flow” reservation states. The reservation states are “soft” states, which have to be refreshed by sending periodic refresh messages. If a reserved state is not refreshed, the state and the corresponding resources are removed after a time-out period. Reservations can also be removed by explicit tear down messages. RSVP messages always follow the data path, and so RSVP can operate alongside standard routing protocols. If traffic is re-routed, refresh messages make reservations in the new data path.
In large networks the number of flows, and therefore the number of reservation states, is high. This can lead to problems storing and maintaining per-flow states in each router. Another architecture, Differentiated Services (DiffServ), has therefore been proposed to provide QoS in large-scale networks, and this is described in RFC 2475. In the DiffServ architecture, services are offered on an aggregate, rather than per-flow basis, in order to allow scaling up to larger networks. As much of the per-flow state as possible is forced to the edges of the network, and different services are offered for these aggregates in routers. This provides for scalability of the DiffServ architecture.
The service differentiation is achieved using the Differentiated Services (DS) field in the IP header. Packets are classified into Per-Hop Behaviour (PHB) groups at the edge nodes of the DiffServ network. Packets are handled in DiffServ routers according to the PHB indicated by the DS field in the message header. The DiffServ architecture does not provide any means for devices outside the domain to dynamically reserve resources or receive indications of network resource availability. In practice, service providers rely on subscription-time Service Level Agreements (SLAs) that statically define the parameters of the traffic that will be accepted from a customer.
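The classification step described above can be pictured as follows. This is a minimal sketch (the `classify` function and the dictionary packet representation are hypothetical, introduced only for illustration); it stamps standard DSCP code points into the DS field at an edge node, after which interior routers forward on that value alone.

```python
# Illustrative sketch of DiffServ edge classification (hypothetical names).
# Interior routers never inspect per-flow state; they act only on the DSCP.

# A few standard DSCP code points: EF = 46, AF41 = 34, best effort = 0.
DSCP = {"EF": 0b101110, "AF41": 0b100010, "BE": 0b000000}

def classify(packet, sla_class):
    """Stamp the DS field according to the customer's SLA traffic class.

    Unknown classes fall back to best effort.
    """
    packet["dscp"] = DSCP.get(sla_class, DSCP["BE"])
    return packet

print(classify({"src": "10.0.0.1"}, "EF"))  # {'src': '10.0.0.1', 'dscp': 46}
```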
The IETF Next Steps In Signaling (NSIS) Working Group is currently working on a protocol to meet new signaling requirements of today's IP networks, as defined in RFC 3726. The QoS signaling application protocol of NSIS is fundamentally similar to RSVP, but has several new features, one of which is the support of different QoS Models. One of the QoS models under specification is Resource Management in DiffServ (RMD). RMD defines scalable admission control methods for DiffServ networks, so that interior nodes inside a domain possess aggregated states rather than per-flow state information. For example, interior nodes may know the aggregated reserved bandwidth, rather than each flow's individual reservation. RMD also uses soft states (as with RSVP), and explicit release of resources is also possible. RMD also includes a “pre-emption” function, which is able to terminate a required number of packet flows when congestion occurs in order to maintain the required QoS for the remaining flows. This is described in WO 2006/052174.
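The aggregated-state admission control described above can be illustrated with a minimal sketch. This is not the RMD specification; the class and method names are hypothetical. A hypothetical interior node tracks only the aggregated reserved bandwidth per traffic class, admits a reservation if the new aggregate still fits the link capacity, and supports explicit release without any per-flow lookup.

```python
# Illustrative sketch (not the RMD specification): an interior node that
# keeps aggregated reservation state per traffic class instead of
# per-flow reservation states.

class AggregatedAdmissionControl:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = {}  # traffic class -> aggregated reserved bandwidth (kbps)

    def request(self, traffic_class, bandwidth_kbps):
        """Admit a reservation only if the total aggregate fits the capacity."""
        total = sum(self.reserved.values()) + bandwidth_kbps
        if total > self.capacity:
            return False  # local admission control decision: reject
        self.reserved[traffic_class] = (
            self.reserved.get(traffic_class, 0) + bandwidth_kbps
        )
        return True

    def release(self, traffic_class, bandwidth_kbps):
        """Explicit release: subtract from the aggregate, no per-flow lookup."""
        self.reserved[traffic_class] = max(
            0, self.reserved.get(traffic_class, 0) - bandwidth_kbps
        )
```

Note that the node cannot tell which individual flows make up the aggregate; this is precisely the property that makes the scheme scalable, and also the reason priority handling becomes difficult, as discussed below.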
A recent Internet Draft (“RSVP Extensions for Emergency Services”, F. Le Faucheur et al., draft-lefaucheur-emergency-rsvp-02.txt) specifies an extension of RSVP for supporting emergency services. It defines a priority policy element for RSVP and describes examples of bandwidth allocation models for admission priority.
When per-flow methods are used (IntServ with RSVP or QoS-NSLP signaling), the handling of high priority flows is not an issue, since each node maintains per-flow states. Where a decision must be taken to admit or pre-empt a flow, account can be taken of the priority of the flow in each router. In “stateless” domains, such as RMD or RSVP aggregation, the interior nodes do not maintain per-flow state information, only aggregated states (e.g. per-class states). Therefore, they cannot associate data packets with priority information. In stateless methods the edge nodes are responsible for admission and pre-emption of flows, and they also have to make the priority decisions.
In the methods described in the Internet Draft “RSVP Extensions for Emergency Services” (F. Le Faucheur et al., draft-lefaucheur-emergency-rsvp-02.txt), admission priority is taken into account. This means that these methods guarantee that higher priority flows can be admitted to the network in preference to lower priority flows. However, these solutions assume a slowly changing environment (i.e. a relatively slow increase in the number of calls and no topology changes). The support of QoS, or priority handling in the case of link or node failure, is based on per-flow states, which are not available with stateless protocols such as RMD.
RMD describes a method, known as a severe congestion algorithm, for ensuring QoS in a stateless DiffServ domain when rerouting takes place (due, for example, to link or node failure). If a router is severely congested (i.e. it is dropping a large number of packets), the RMD edge nodes terminate some of the flows in order to maintain QoS for the remaining flows. The priority of flows can be taken into account by preferentially dropping low priority flows, but the problem is not entirely solved.
This can be understood by considering the situation illustrated in FIG. 1. FIG. 1 is a schematic diagram of selected nodes in a stateless domain. The diagram shows an ingress edge 101, interior router 102 and egress edge 103. Suppose that there is congestion at the interior router 102. According to the RMD severe congestion algorithm, data packets are marked by the interior router in order to notify edge nodes about the congestion. The number of marked bytes indicates the excess traffic. In each egress edge node 103, the number of marked bytes is measured, and a decision is taken to terminate a corresponding number of flows. This is achieved by the egress edge 103 sending a message to the ingress edge 101 to terminate the required flows. Priority is taken into account by selecting and terminating low priority flows.
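The decision at the egress edge can be sketched as follows (illustrative only; the function name and flow representation are hypothetical and do not reflect the RMD wire format). The egress measures the marked bytes over an interval, treats them as the amount of excess traffic to be shed, and selects flows for termination, lowest priority first.

```python
# Illustrative sketch of the egress edge decision in the severe congestion
# algorithm (hypothetical names, not the RMD message format).

def select_flows_to_terminate(flows, marked_bytes_per_interval):
    """Pick flows to terminate, lowest priority first.

    flows: list of (flow_id, priority, bytes_per_interval) tuples,
           where a higher priority value means a more important flow.
    marked_bytes_per_interval: measured marked bytes, i.e. excess traffic.
    """
    excess = marked_bytes_per_interval
    terminated = []
    # Sort ascending by priority so low priority flows are dropped first.
    for flow_id, priority, rate in sorted(flows, key=lambda f: f[1]):
        if excess <= 0:
            break
        terminated.append(flow_id)
        excess -= rate
    return terminated

flows = [("low1", 0, 50), ("high1", 1, 50), ("low2", 0, 50)]
print(select_flows_to_terminate(flows, 100))  # ['low1', 'low2']
```

The egress would then signal the ingress edge to tear down the selected flows. Note that each egress makes this decision using only its own local measurements, which is the root of the problem described next.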
However, it may be that terminating all of the low priority flows will still not be sufficient to overcome the congestion, in which case high priority flows will also be terminated. For example, suppose the traffic composition 104 at the egress edge node 103 is such that 90% of the flows are high priority calls 105. This may arise, for example, because this node directs traffic to an emergency centre. If 40% of all traffic has to be terminated, then all of the low priority calls 106 (10% of the total) will be terminated, but the congestion will still be present. The congestion can only be overcome by terminating approximately 30% of the high priority traffic in addition to all of the low priority traffic.
However, suppose there are many low priority calls passing through the congested router 102 that leave the network via other egress nodes. This situation is illustrated in FIG. 2, which shows nodes in a domain similar to that shown in FIG. 1. This time the domain has two different egress nodes 103, 203 that have different compositions 104, 204 of low and high priority traffic. In this example the first egress edge node 103 has 10% low priority traffic 106 and 90% high priority traffic 105, as before. The second egress node 203 has 80% low priority traffic 206 and 20% high priority traffic 205. This time, if there is a 40% overload at the router 102, both egress nodes would terminate 40% of their traffic. The second egress edge node 203 would be able to terminate only low priority flows, but the first egress edge node 103 would still terminate 30% of its high priority traffic (as before). This is not desirable, because the second egress edge node 203 still has low priority flows 206 which could be terminated in preference to the high priority flows at the first egress node 103. If more low priority flows 206 were terminated by the second egress node 203, there would be no need for the first egress node 103 to terminate high priority flows 105.
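The figures above can be reproduced with a short calculation (illustrative only; `local_termination` is a hypothetical helper). Because each egress independently terminates 40% of its own traffic, lowest priority first, high priority flows are cut at the first egress even though a domain-wide view could shed only low priority flows.

```python
# Worked example using the traffic compositions of FIG. 2 (illustrative).

def local_termination(low_pct, overload_pct):
    """Return (low_terminated, high_terminated) as percentages of one
    egress node's traffic, when that egress acts on local information only
    and terminates low priority flows first."""
    low_cut = min(low_pct, overload_pct)   # drop low priority flows first
    high_cut = overload_pct - low_cut      # remainder must come from high priority
    return low_cut, high_cut

# First egress node (10% low, 90% high): 30% of high priority traffic is cut.
print(local_termination(10, 40))  # (10, 30)
# Second egress node (80% low, 20% high): only low priority flows are cut.
print(local_termination(80, 40))  # (40, 0)
```

A domain-wide view would instead shed the full 40% from the second egress node's remaining low priority flows, sparing the high priority traffic entirely.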
An additional consideration is that, in emergency situations (i.e. when there are many high priority flows), link or node failures occur with a higher probability than under normal conditions, leading to congestion. Thus there is a significant chance that high congestion and many high priority flows will occur at the same time. It is therefore important that networks should handle this problem properly.