Many sophisticated emerging applications, such as voice over IP, multimedia conferencing, or distributed virtual reality, are difficult to deploy in today's internetworking infrastructure. This is mainly due to one requirement that all these applications share: the need for guaranteed real-time service. These applications require not only high bandwidth but also predictable quality of service (QoS), such as bounded delay and delay jitter.
The QoS requirements at the network level are typically specified in terms of bounds on the worst-case end-to-end delay, on the worst-case packet loss rate, and on the worst-case delay jitter for packets of the connection. Other parameters may be specified as well, such as the deadline miss rate. The desired delivery time for each message across the network is bounded by a deadline, a specific maximum delivery delay. This delay bound is an application-layer, end-to-end timing constraint.
If a message arrives after the deadline has expired, it is useless and is typically discarded. For many real-time applications, however, it is not important how fast a message is delivered. Indeed, packets arriving early may need to be buffered at the receiver to achieve a constant end-to-end delay. Therefore, delay jitter, which is the variation in the delay experienced by packets of a single connection, is a critical performance metric. For example, in video transmission, jitter may cause some frames to arrive early and others to arrive late. Although the transmission of all frames satisfies the deadline requirement, the displayed movie may appear jittery. The same applies to streamed audio data.
Buffers at the receiver can be used to control delay jitter. The amount of buffer space required can be determined from the peak rate and the delay jitter of the delivery process; it can be quite large in a network with no control of delay.
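The buffer sizing described above can be illustrated with a minimal sketch. The formula (peak rate times jitter interval) and the numeric values below are illustrative assumptions, not figures from the text.

```python
# Rough sketch of receiver buffer sizing from peak rate and delay jitter.
# In the worst case, the buffer must hold all data that arrives at the
# peak rate during one delay-jitter interval.

def buffer_size_bytes(peak_rate_bps: float, delay_jitter_s: float) -> int:
    """Worst-case number of bytes arriving during one jitter interval."""
    return int(peak_rate_bps / 8 * delay_jitter_s)

# Example (assumed values): a 4 Mbit/s video stream with up to 200 ms of
# delay jitter needs roughly 100 kB of receiver buffer.
print(buffer_size_bytes(4_000_000, 0.200))  # 100000
```

As the sketch suggests, the required buffer grows linearly with both the peak rate and the uncontrolled jitter, which is why uncontrolled networks may demand very large buffers.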
The most important quality-of-service parameters are delay jitter, delay, and packet loss. Delay jitter and packet loss obstruct proper reconstruction at the receiver, whereas delay impairs interactivity.
The following section contains definitions of the notions of streams, packets, and channels.
Streamed data is a data sequence that is transmitted and processed continuously. Streaming is the process of continuously appending data to a data stream.
A packet is a piece of data consisting of a header and payload information. Packetizing is the process of decomposing data into a set of (small) packets, where the header is used to store information for reconstruction, e.g., a sequence number.
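Packetizing with a sequence number in the header can be sketched as follows. The concrete header layout (a 4-byte big-endian sequence number) is an assumption for illustration only.

```python
# Minimal sketch of packetizing: split a byte stream into packets whose
# header carries a sequence number, which the receiver uses to restore
# the original order even if packets arrive out of order.
import struct

def packetize(data: bytes, payload_size: int) -> list:
    """Decompose data into packets of [4-byte seq number | payload]."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), payload_size)):
        header = struct.pack(">I", seq)  # sequence number (assumed layout)
        packets.append(header + data[offset:offset + payload_size])
    return packets

def reassemble(packets: list) -> bytes:
    """Reconstruct the stream by sorting packets on their sequence number."""
    ordered = sorted(packets, key=lambda p: struct.unpack(">I", p[:4])[0])
    return b"".join(p[4:] for p in ordered)
```

Even if the network reorders the packets, sorting on the sequence number recovers the original data, which is precisely the reconstruction role of the header mentioned above.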
A data channel is a connection between two network units that is able to transport data.
Delay is the time between sending and receiving a packet. Delay jitter is the variation in delay. Packet loss corresponds to an infinite delay.
A commonly used technique for streamed data is to employ a buffer at the receiver, reducing delay jitter and packet loss at the cost of an increased overall delay. Hence there is a demand for optimization. Especially real-time streamed data, such as video or audio streams, needs to be processed on-line, i.e., with small delay and small delay jitter.
A well-known algorithm to solve this problem is to buffer streamed data and to replay the buffer at a constant speed, absorbing delay variations and playing out packets at a fixed deadline; this is called jitter absorption. Packets received after the deadline are discarded.
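The jitter absorption scheme can be sketched as follows. The time values and the fixed 100 ms play-out delay are illustrative assumptions.

```python
# Sketch of jitter absorption: each packet is buffered and played out at a
# fixed deadline (send time + play-out delay); packets arriving after
# their deadline are useless and are discarded. Times are in seconds.

def jitter_absorption(arrivals, playout_delay):
    """arrivals: list of (seq, send_time, arrival_time) tuples.
    Returns (played, discarded) lists of sequence numbers."""
    played, discarded = [], []
    for seq, send_time, arrival_time in arrivals:
        deadline = send_time + playout_delay  # fixed play-out deadline
        if arrival_time <= deadline:
            played.append(seq)     # buffered until the deadline, then played
        else:
            discarded.append(seq)  # arrived after the deadline, discarded
    return played, discarded

# Packets sent every 20 ms with varying network delay; 100 ms play-out delay.
arrivals = [(0, 0.00, 0.05), (1, 0.02, 0.13), (2, 0.04, 0.09)]
print(jitter_absorption(arrivals, 0.100))  # ([0, 2], [1])
```

Note the trade-off: a larger play-out delay lets fewer packets miss their deadline, but increases the overall delay, which motivates the adaptive variant described next.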
A more sophisticated algorithm is to monitor the delay and/or the delay variation and to adapt the play-out time accordingly; this is called jitter adaptation. An application might then slow down play-out when the delay increases, to avoid loss, and speed up play-out when the delay decreases, to reduce delay.
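One common way to realize such jitter adaptation is to track running estimates of the delay and its variation with exponentially weighted moving averages and to derive the play-out delay from them. The smoothing factor and safety margin below are assumed values, not parameters from the text.

```python
# Sketch of jitter adaptation: the receiver monitors per-packet network
# delays, maintains smoothed estimates of the mean delay and the delay
# variation, and adapts the play-out delay accordingly: it grows when
# delays increase (avoiding loss) and shrinks when delays decrease
# (reducing overall delay).

def adapt_playout(delays, alpha=0.9, margin=2.0):
    """delays: observed per-packet network delays in seconds.
    Yields the adapted play-out delay after each observation."""
    d_hat = 0.0  # smoothed delay estimate
    v_hat = 0.0  # smoothed delay-variation estimate
    for d in delays:
        d_hat = alpha * d_hat + (1 - alpha) * d
        v_hat = alpha * v_hat + (1 - alpha) * abs(d - d_hat)
        yield d_hat + margin * v_hat  # play-out delay with safety margin
```

With this design, a sustained rise in network delay gradually raises the play-out delay rather than causing a burst of discarded packets, while calm periods let the play-out delay drop back down.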
It is an object of the invention to provide a method for reducing delay jitter, delay, and packet loss for streamed data connections.