In ATM (Asynchronous Transfer Mode) devices, the buffering of data packets (also called cells) is of great significance.
In ATM communications networks, both synchronous (more accurately: isochronous) data, such as speech or video data, and asynchronous data, such as occur in communication between data processing installations, are carried over the same physical connection paths. Each of these data communication services has entirely different requirements regarding the quality of the data connection. For example, communication between data processing installations requires a considerably lower (cell) loss probability than a speech communication.
It therefore makes sense to assign different priority classes to the data packets and to treat them differently in the exchanges. This requires temporary memories that guarantee an upper limit on the loss probability of the data packets as a function of their priority, while making the best possible use of the available memory space.
The article "Priority Queuing Strategies and Buffer Allocation Protocols for Traffic Control at an ATM Integrated Broadband Switching System" by Arthur Y.-M. Lin and John A. Silvester in the IEEE Journal on Selected Areas in Communications, Vol. 9, No. 9, December 1991, describes a generic access control method for a buffer known as "partial buffer sharing".
Data packets, which are assigned to one of two priority classes, one high and one low, are stored together in a buffer. For each priority class, a threshold value for comparison with the occupancy level of the buffer is determined before the buffer is placed into service. The threshold value for the high priority class is set to a value that corresponds to the maximum occupancy level of the buffer. The threshold value for the lower priority class is set to a value between 0 and the maximum occupancy level of the buffer, based on theoretical traffic calculations.
The buffer is organized as a FIFO queue (First In, First Out). Several readout devices each remove one data packet at a time from the lower end of the queue; the lower end of the queue holds the data packet that was written into the queue first.
High and low priority data packets arrive at the buffer in accordance with a predetermined random process. These data packets are treated as follows:
a) If it is a data packet of the high priority class, and there is still room in the buffer, the data packet is inserted at the upper end of the queue.
b) If it is a data packet of the lower priority class, and the occupancy level of the buffer is below the threshold value of the lower priority class, this data packet is also inserted at the upper end of the queue.
In all other cases the incoming data packet is discarded and is thus lost.
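The access rules a) and b) above can be sketched as follows. This is an illustrative sketch only, not the implementation of the cited article; the class name, method names, and parameters are chosen here for illustration.

```python
from collections import deque

HIGH, LOW = "high", "low"

class PartialBufferSharingQueue:
    """FIFO buffer shared by two priority classes (partial buffer sharing)."""

    def __init__(self, capacity: int, low_threshold: int):
        # The high-priority threshold equals the maximum occupancy level
        # (capacity); the low-priority threshold lies between 0 and capacity.
        assert 0 < low_threshold <= capacity
        self.capacity = capacity
        self.low_threshold = low_threshold
        self.queue = deque()  # left = lower end (oldest), right = upper end

    def offer(self, cell, priority) -> bool:
        """Try to insert an arriving cell; return False if it is discarded."""
        level = len(self.queue)
        limit = self.capacity if priority == HIGH else self.low_threshold
        if level < limit:
            self.queue.append((cell, priority))  # insert at the upper end
            return True
        return False  # cell is discarded and thus lost

    def read_out(self):
        """A readout device removes the oldest cell from the lower end."""
        return self.queue.popleft() if self.queue else None
```

With a capacity of 10 and a low-priority threshold of 7, for example, three buffer places remain reserved for high-priority cells once seven cells are stored.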
This method of access control means that only part of the space in the buffer is usable by data packets of both priority classes. The memory space corresponding to the difference between the maximum occupancy level of the buffer and the threshold value of the lower priority class is reserved for data packets of the higher priority class. This produces a lower loss probability for these data packets than for those of the lower priority class.
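The unequal loss probabilities can be seen in a small deterministic example, self-contained and with parameters chosen here purely for illustration: a burst of alternating high- and low-priority arrivals hits the buffer faster than any readout occurs.

```python
CAPACITY = 10       # high-priority threshold = maximum occupancy level
LOW_THRESHOLD = 7   # low-priority threshold, set between 0 and CAPACITY

def admitted(level: int, high_priority: bool) -> bool:
    """Partial-buffer-sharing admission rule for one arriving cell."""
    limit = CAPACITY if high_priority else LOW_THRESHOLD
    return level < limit

level = 0
lost = {"high": 0, "low": 0}
for i in range(20):              # 10 high and 10 low arrivals, alternating
    high = (i % 2 == 0)
    if admitted(level, high):
        level += 1               # cell stored; no readout during the burst
    else:
        lost["high" if high else "low"] += 1

print(lost)   # → {'high': 3, 'low': 7}
```

Once the occupancy level reaches the low-priority threshold, only high-priority cells are still admitted into the reserved space, so the low-priority class absorbs most of the losses.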
In exchange, such a buffer is always utilized somewhat less than a fully shared buffer would be, which slightly increases the total loss probability.
If an upper limit on the loss probability is to be guaranteed for the data packets of the higher priority class, however, the buffer utilization in many practical applications falls considerably below what this effect alone can explain.