When data packets are transferred in a communication network, they must be forwarded from a transmitting network node (transmitter) via one or more network nodes (what are known as intermediate nodes) to a receiving network node (receiver). For this purpose each data packet comprises what is known as a header with control and routing information. A data packet also comprises a field for the payload, in which the actual data is contained. Forwarding or transferring data packets to a network node of the communication network is also called “packet switching”. When a data packet is received, its header is evaluated in order to determine to which further network nodes of the communication network the data packet must be transferred. For this purpose each network node of the communication network must know the topology of the communication network in order to determine the next network node, as a rule on the basis of a routing table. This applies in particular to communication networks which are designed as Ethernet communication networks. In principle, however, the process is also applicable to other communication networks.
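The routing-table lookup described above can be sketched as follows. This is a minimal illustrative example only; the node names and the table layout are assumptions and not part of the original disclosure.

```python
# Illustrative sketch (assumed names): a network node determines the next
# network node for a packet from the destination field of its header,
# based on the node's routing table.

ROUTING_TABLE = {
    # destination node -> next network node on the path (hypothetical topology)
    "node-D": "node-B",
    "node-E": "node-C",
}

def next_hop(destination: str) -> str:
    """Look up the next network node for the given destination."""
    try:
        return ROUTING_TABLE[destination]
    except KeyError:
        raise ValueError(f"no route to {destination}")

# A packet addressed to node-D is forwarded to node-B first.
print(next_hop("node-D"))  # -> node-B
```

In practice an Ethernet switching device performs this lookup on the destination MAC address for every received frame, which is part of the per-packet processing delay discussed below.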
A basic problem when transferring data packets is that data packets arriving at a switching device, i.e. a network node of the communication network that forwards the data packet, are only forwarded after a delay. The reason for this is that simultaneous processing of a large number of data packets at one switching device is not usually possible. The delay is caused in particular by reading out and evaluating the data contained in the header.
The situation can occur in this connection where, immediately after the start of transfer of a data packet having a low priority, a data packet having a high priority arrives at the switching device, for example from a different network node, and is to be transmitted to the same network node as the data packet having a low priority. Without special handling, the data packet having a high priority must wait until the transfer of the data packet having a low priority has completely finished. This causes what is referred to as packet jitter. There are applications which are critical with respect to jitter, for example time synchronization between a plurality of network nodes of the communication network in accordance with IEEE 1588, where accuracy in the range of a microsecond is required.
A generic method for transferring data packets in a communication network which avoids such a situation is known from US 2005/0175013 A1. In this method it is proposed that data packets having different priorities are stored in different output buffers, with transfer from these output buffers taking place as a function of priority. In one scenario it is provided that transfer of a data packet having a low priority is stopped as soon as a data packet having a high priority exists in the corresponding output buffer. Only after the data packet having a high priority has been completely transferred to the receiver is transfer of the data packet having a low priority resumed. This process is called “pre-emption with retransmission service policy”.
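The timing behavior of such a pre-emption with retransmission policy can be illustrated with a toy timeline calculation. The function name, the abstract time units and the packet lengths below are assumptions for illustration and are not taken from the cited document.

```python
def schedule(low_len, high_arrival, high_len):
    """Toy model of 'pre-emption with retransmission' (illustrative
    assumptions): the low-priority packet starts on the wire at t=0; if a
    high-priority packet arrives before it finishes, the low-priority
    transfer is aborted, the high-priority packet is sent in full, and
    the low-priority packet is then retransmitted from the start.
    Returns (label, start, end) segments in abstract time units."""
    events = []
    if high_arrival < low_len:
        # High-priority packet pre-empts the ongoing low-priority transfer.
        events.append(("low-aborted", 0, high_arrival))
        events.append(("high", high_arrival, high_arrival + high_len))
        restart = high_arrival + high_len
        # The whole low-priority packet is sent again from the beginning.
        events.append(("low-retransmit", restart, restart + low_len))
    else:
        # No overlap: packets are simply sent one after the other.
        events.append(("low", 0, low_len))
        start = max(high_arrival, low_len)
        events.append(("high", start, start + high_len))
    return events

# Low-priority packet of length 10; high-priority packet of length 4
# arrives at t=3 and pre-empts it.
print(schedule(10, 3, 4))
# -> [('low-aborted', 0, 3), ('high', 3, 7), ('low-retransmit', 7, 17)]
```

The example also shows the cost of this policy: the partially transferred low-priority data (the first three time units) is discarded and sent again, so total wire time exceeds the sum of the two packet lengths.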
To simplify handling of data packets having a high priority in a switching device, US 2005/0175013 A1 proposes inserting an indicator bit in the header of the data packet, so that data packets with an identification of this kind are given priority over other data packets. The method described in US 2005/0175013 A1 is aimed in particular at the transfer of voice messages in a communication network operating in accordance with the Internet Protocol.
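Checking such an indicator bit could be sketched as follows. The bit position and header layout are assumptions chosen for illustration; the cited document does not fix them here.

```python
# Assumed (hypothetical) position of the priority indicator bit in the
# first header byte; not taken from US 2005/0175013 A1.
PRIORITY_BIT = 0x80

def has_priority(header: bytes) -> bool:
    """Return True if the indicator bit marks the packet as high priority."""
    return bool(header[0] & PRIORITY_BIT)

print(has_priority(bytes([0x80, 0x01])))  # -> True
print(has_priority(bytes([0x00, 0x01])))  # -> False
```

The point of such an indicator is that a switching device can give the packet precedence after inspecting a single bit, without first parsing the remainder of the header.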