In the Third Generation Partnership Project (3GPP), the fourth generation (4G) cellular network includes a radio access network (e.g., referred to as a long term evolution (LTE) network) and a wireless core network (e.g., referred to as an evolved packet core (EPC) network). The LTE network is often called an evolved universal terrestrial radio access network (E-UTRAN). The EPC network is an all-Internet protocol (IP) packet-switched core network that supports high-speed wireless and wireline broadband access technologies. An evolved packet system (EPS) is defined to include both the LTE and EPC networks. EPS seeks to advance mobile technology through higher bandwidth, better spectrum efficiency, wider coverage, enhanced security, and full interworking with other access networks. EPS proposes to achieve these goals using an all-IP architecture.
An admission control mechanism determines whether a new real-time traffic connection can be admitted to a network (e.g., an EPC network) without jeopardizing performance guarantees provided to already-established connections. The objective of connection admission control (CAC) is to guarantee a quality of service (QoS) for all connections in the network, while at the same time making efficient use of network resources (e.g., accommodating as many connections as possible). QoS refers to resource reservation control mechanisms that provide different priorities to different applications, users, and/or traffic (e.g., data flows), or that guarantee a certain level of performance (e.g., a required bit rate, delay, jitter, packet dropping probability, and/or bit error rate (BER)) to traffic. For example, the objective of CAC may be to determine whether a requested connection (e.g., an arriving call), with specified QoS requirements (e.g., packet loss ratio, delay, jitter, etc.), should be admitted to the network. A new connection request is admitted only if the request's QoS constraints can be satisfied without jeopardizing the QoS constraints of existing connections in the network.
The implementation of CAC can be quite complex. A first conventional CAC mechanism allocates a specified bandwidth, at each network node (e.g., network resource) along an end-to-end connection path, based on peak rates (PRs) of the connections. A network typically monitors various rates (e.g., peak rates and average rates) of applications and connections for proper dimensioning. Although the first CAC mechanism guarantees a desired QoS along the connection path, the first CAC mechanism requires far more bandwidth than might otherwise be necessary. In other words, under the first CAC mechanism, valuable network resources are wasted without any statistical multiplexing gain.
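The peak-rate admission check described above can be sketched as follows. This is a minimal illustration, not an implementation from the source: the class and parameter names are hypothetical, and a single link of fixed capacity stands in for a network node along the connection path.

```python
# Hypothetical sketch of peak-rate (PR) based admission control.
# Assumption: each connection declares a peak rate, and one link of
# fixed capacity models a network node on the end-to-end path.

class PeakRateCAC:
    """Admit a connection only if the sum of all admitted connections'
    peak rates stays within the link capacity."""

    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0  # bandwidth reserved for admitted connections

    def admit(self, peak_rate_bps):
        # Reserve the full peak rate: QoS is guaranteed even if every
        # connection bursts to its peak, but the reserved bandwidth sits
        # idle whenever a connection sends below its peak (no
        # statistical multiplexing gain).
        if self.reserved_bps + peak_rate_bps <= self.capacity_bps:
            self.reserved_bps += peak_rate_bps
            return True
        return False

cac = PeakRateCAC(capacity_bps=10_000_000)  # 10 Mb/s link
print(cac.admit(4_000_000))  # True  (4 Mb/s reserved)
print(cac.admit(4_000_000))  # True  (8 Mb/s reserved)
print(cac.admit(4_000_000))  # False (12 Mb/s would exceed capacity)
```

Note that the third request is rejected even though the connections may rarely transmit at their peak rates simultaneously, which is precisely the inefficiency the paragraph above describes.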
A second conventional CAC mechanism allocates bandwidth at each node based on average rate (AR) requirements of the connections. While the second CAC mechanism saves network resources by allocating less bandwidth than the peak rate requirement, service quality will suffer when some or most of the connections request more than the average rate. This is not ideal because the service quality is subject to the unpredictable traffic conditions of the network (e.g., QoS cannot be guaranteed in some or most cases).
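The average-rate variant differs only in which rate it reserves, but that change is enough to break the QoS guarantee. The sketch below, with hypothetical names and a single fixed-capacity link as the node model, shows how admitting on average rates can leave the link overcommitted the moment connections burst above their averages:

```python
# Hypothetical sketch of average-rate (AR) based admission control.
# Assumption: connections declare an average rate but may transmit
# above it (up to some peak) at any moment.

class AverageRateCAC:
    """Admit based on average rates: saves bandwidth relative to
    peak-rate reservation, but QoS may suffer when many connections
    burst above their average simultaneously."""

    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0

    def admit(self, avg_rate_bps):
        # Reserve only the average rate, so more connections fit.
        if self.reserved_bps + avg_rate_bps <= self.capacity_bps:
            self.reserved_bps += avg_rate_bps
            return True
        return False

    def instantaneous_load(self, current_rates_bps):
        # Ratio of instantaneous demand to capacity; a value above 1.0
        # means the link is overloaded and QoS is being violated.
        return sum(current_rates_bps) / self.capacity_bps

cac = AverageRateCAC(capacity_bps=10_000_000)  # 10 Mb/s link
for _ in range(3):
    cac.admit(3_000_000)  # three connections averaging 3 Mb/s each

# If all three burst to 5 Mb/s at once, demand is 15 Mb/s on a
# 10 Mb/s link: the load exceeds 1.0 and guarantees break down.
print(cac.instantaneous_load([5_000_000, 5_000_000, 5_000_000]))
```

A peak-rate scheme would have admitted only two of these connections (2 x 5 Mb/s = 10 Mb/s), trading capacity for the guarantee; the average-rate scheme admits all three and leaves QoS hostage to traffic behavior.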