As more users become acquainted with the Internet, the demand imposed on this packet-based network to service diverse applications increases. To date, network operators face the daunting challenge of delivering quality of service (QoS) amid compounding network-related issues and the advent of ever more demanding applications and services. Network performance degrades, as observed in high latencies and increasing packet drops. Problems such as link congestion, which dramatically affect user perception of the service offered by the best-effort IP transport network, must be resolved immediately. Any failure to meet the required service-level agreement (SLA) has a direct impact on the profitability of the network business. As such, operators' current resource management mechanisms need to be re-examined, and alternatives should be explored and evaluated, in order to determine the best approach to addressing network congestion.
If a backbone link is implemented with a lower capacity than the total capacity of the backhaul links aggregated onto it, a capacity mismatch is present. This mismatch typically arises in networks that minimize recurring costs by limiting the capacity of each backbone link, such as T1/E1 leased lines, while extracting the highest possible utilization from every backhaul link terminated on it. The premise widely adopted by large network providers is that, owing to the large number of customers on board, the diversity of their locations, and the different types of service requested, there is a very low probability that a high number of connections will be active at the same time, as described by Fichou et al. in U.S. Pat. No. 6,765,873 B1. This premise permits more connections to be established on the backbone link than its actual total bandwidth capacity can handle. Problems arise, however, when the strict queuing disciplines applied are no longer sufficient to provide guaranteed service because too much traffic is being admitted. This results from customers who have significantly increased their usage while employing diverse applications, leading to the prevailing network congestion issue. On the other hand, adjusting the committed information rate (CIR) of the subscribers, as proposed in this prior art, will tend to violate the service-level agreement (SLA).
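The capacity mismatch described above can be quantified as an oversubscription ratio. The following is a minimal sketch; the link capacities used are illustrative assumptions, not figures from any deployment.

```python
def oversubscription_ratio(backhaul_mbps, backbone_mbps):
    """Ratio of aggregated backhaul capacity to backbone link capacity.

    A ratio above 1.0 indicates a capacity mismatch: more backhaul
    bandwidth is aggregated onto the backbone than it can carry.
    """
    return sum(backhaul_mbps) / backbone_mbps

# Hypothetical example: four 2 Mbps backhaul links aggregated onto a
# single E1 leased line (~2.048 Mbps).
ratio = oversubscription_ratio([2.0, 2.0, 2.0, 2.0], 2.048)
print(round(ratio, 2))  # 3.91 -- heavily oversubscribed
```

A ratio this high is tolerable only under the low-simultaneity premise above; once usage patterns change, congestion follows.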
In a congested network, resource and admission control functions are required, as presented in the pending U.S. patent application by Dos Remedios et al. entitled “Methods and Systems for Call Admission Control and Providing Quality-of-Service in Broadband Wireless Access Packet-Based Networks”. In an embodiment of that invention, an Access Controller is deployed at access-to-transport and transport-to-transport interface points where a capacity mismatch exists. The Access Controller executes buffer management and queuing, and further guarantees QoS by managing and sharing the bandwidth of the lower-capacity transport component. It performs access control by authenticating the requester, and it allocates both a committed information rate (CIR) and a maximum information rate (MIR) to each subscriber terminal based on the respective user profile stored in the database. It can also form a crucial part of a Call Admission Control system that considers physical transport capacities along with ongoing sessions to ensure service integrity. The Access Controller fundamentally performs policy enforcement within the Resource and Admission Control Functionality (RACF) as depicted in the ITU Next Generation Network standards.
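The authentication and CIR/MIR allocation step can be sketched as follows. This is an illustrative simplification under assumed profile fields (`cir_kbps`, `mir_kbps`) and an in-memory profile table; it is not the patented method itself.

```python
# Hypothetical user-profile database; field names are assumptions.
USER_PROFILES = {
    "subscriber-001": {"cir_kbps": 256, "mir_kbps": 1024},
    "subscriber-002": {"cir_kbps": 512, "mir_kbps": 2048},
}

def admit(subscriber_id, available_kbps):
    """Authenticate the requester against the profile database, then
    admit only if the transport can still commit the subscriber's CIR."""
    profile = USER_PROFILES.get(subscriber_id)
    if profile is None:
        return None  # authentication failure: unknown subscriber
    if profile["cir_kbps"] > available_kbps:
        return None  # admission denied: CIR cannot be guaranteed
    return {"cir_kbps": profile["cir_kbps"], "mir_kbps": profile["mir_kbps"]}

grant = admit("subscriber-001", available_kbps=300)
print(grant)  # {'cir_kbps': 256, 'mir_kbps': 1024}
```

Rejecting requests whose CIR cannot be committed, rather than silently degrading all flows, is what preserves service integrity for the sessions already admitted.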
The NGN goal of total end-to-end QoS can be achieved if network providers are capable of accessing and managing their network resources efficiently. To implement congestion avoidance as a necessary and sufficient condition for QoS, the state of congestion within the entire network must be known. A feedback control mechanism is used to relay the congestion-state information to a Core QoS Manager, which in turn relays the appropriate control or threshold-management signals to the Access Controllers. The policy enforcement decisions made by the Core QoS Manager are passed on to the Access Controllers to address the congestion state at specific points in the network, thereby implementing the needed and adequate bandwidth management actions. The Core QoS Manager therefore provides the mapping necessary to correlate specific Access Controller actions with specific network congestion conditions. It further requires a feedback control mechanism capable of responding within a time constant sufficiently shorter than the rate of change of the congestion state in the network; hence, this feedback control mechanism must adhere to the Nyquist sampling rule. Both near real-time monitoring and the feedback mechanism play crucial roles in the Resource and Admission Control Function (RACF). Updates on the network congestion state, spanning both the access and core domains, are fed into the Network Management System, and this data is used by the Core QoS Manager to issue specific instructions to the Access Controllers. Since bottlenecks occur due to resource contention, dynamic bandwidth allocation to the transport pipes must be enforced to protect the network and to ensure that traffic is controlled. Dimensioning for network scalability can also be performed using data obtained from this feedback mechanism.
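The feedback loop above can be sketched as a mapping from sampled congestion state to Access Controller actions. The utilization thresholds, action names, and congestion-change period below are illustrative assumptions; the Nyquist constraint simply requires sampling at least twice as fast as the congestion state can change.

```python
# Assumed shortest period over which the congestion state changes (seconds).
CONGESTION_CHANGE_PERIOD_S = 10.0
# Nyquist bound: sample at no more than half that period.
SAMPLING_INTERVAL_S = CONGESTION_CHANGE_PERIOD_S / 2

def qos_manager_policy(link_utilization):
    """Map a sampled link utilization (0.0-1.0) to a hypothetical
    bandwidth-management action for the affected Access Controller."""
    if link_utilization >= 0.9:
        return "throttle_to_cir"  # congested: enforce committed rates only
    if link_utilization >= 0.7:
        return "cap_at_mir"       # pre-congestion: cap bursts at MIR
    return "allow_burst"          # uncongested: permit bursting above CIR

for sample in (0.55, 0.75, 0.95):
    print(sample, qos_manager_policy(sample))
```

In a deployment, the inner mapping would be driven by the Network Management System's congestion updates rather than a fixed utilization figure, but the control structure is the same.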
Threshold settings for dynamic adjustment of bandwidth management policies on affected Access Controllers shall be defined and illustrated in the succeeding sections of this paper.