The asynchronous transfer mode (ATM) environment is now widely recognized as the preferred way of implementing Broadband Integrated Services Digital Network (B-ISDN) multiservice networks for simultaneously carrying voice, data, and video on the network. ATM networks transmit an encoded video signal in short, fixed-size cells of information using statistical multiplexing.
An ATM network can transmit data at multiple priorities because it allows the terminal to mark each cell as either high or low priority. If congestion develops, the ATM network drops low-priority cells before high-priority cells. Video can be encoded to take advantage of multiple priorities by partitioning the video image into more and less important parts. The more important part, known as the base layer, typically includes enough basic video information for the decoder to reconstruct a minimally acceptable image, and is transmitted by the ATM network in the high-priority bit-stream. The less important part, known as the enhancement layer, is used to enhance the quality of the image, and is transmitted in the low-priority bit-stream. The partitioning of video data into high and low priorities is described in detail in the Moving Picture Experts Group Phase 2 Test Model 5 Draft Version 2, Doc. MPEG93/225, April 1993 (MPEG-2 TM5). Such methods include spatial scalability, frequency scalability, signal-to-noise ratio (SNR) scalability, and data partitioning.
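As a minimal sketch of the data-partitioning method mentioned above, the following illustrates splitting a block's zig-zag-ordered transform coefficients at a priority breakpoint, with the low-frequency portion assigned to the high-priority (base) partition. The function name, coefficient values, and breakpoint are illustrative, not taken from MPEG-2 TM5:

```python
def partition_coefficients(zigzag_coeffs, breakpoint):
    # Split zig-zag-ordered DCT coefficients at a priority breakpoint:
    # the low-frequency coefficients form the high-priority base
    # partition; the remainder forms the low-priority enhancement
    # partition, which the network may discard under congestion.
    base = zigzag_coeffs[:breakpoint]
    enhancement = zigzag_coeffs[breakpoint:]
    return base, enhancement

# Typical block: energy concentrated in the low frequencies.
coeffs = [45, 12, -8, 5, 3, -2, 1, 0, 0, 1]
base, enh = partition_coefficients(coeffs, 4)
# base -> [45, 12, -8, 5]       (sent in high-priority cells)
# enh  -> [3, -2, 1, 0, 0, 1]   (sent in low-priority cells)
```

If the enhancement partition is lost, the decoder can still reconstruct a coarse image from the base partition alone, which is the property the text describes.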
One problem with ATM networks is that each network source is allocated less bandwidth than its peak requirement, which results in a nonzero probability that cells will be lost or delayed during transmission. This probability of loss or delay increases as the load on the network increases. In addition, cells may be effectively lost when random bit errors are introduced into the cell header during transmission. A lost or delayed cell can significantly degrade the image quality of the received video signal because real-time video cannot wait for retransmission of errored cells. Lost cells in a given frame cause decoding errors that can propagate into subsequent frames or into a larger spatial area. An encoding method that provides high video image quality at the remote end, even when there are cell losses on the network, is said to be resilient to cell loss. Cell loss resiliency, however, is less significant when there are no cell losses on the network, such as when the network load is low. Thus, it is desirable to encode video with good compression efficiency when network load is low, but with good resiliency to cell loss when network traffic becomes congested.
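The selective-discard behaviour described above (low-priority cells dropped before high-priority cells) can be sketched as a finite buffer with push-out. This is a toy illustration under assumed names (`admit_cell`, the `clp` field standing for the ATM cell loss priority bit); a real switch enforces this with hardware thresholds, not list scans:

```python
def admit_cell(queue, cell, capacity):
    # Finite output buffer with selective discard: under congestion,
    # low-priority (CLP=1) cells are dropped before high-priority
    # (CLP=0) cells.  Returns True if the cell is admitted.
    if len(queue) < capacity:
        queue.append(cell)
        return True
    if cell["clp"] == 0:
        for i, queued in enumerate(queue):
            if queued["clp"] == 1:
                queue[i] = cell  # push out a queued low-priority cell
                return True
    return False  # buffer full: the arriving cell is lost

q = []
admit_cell(q, {"clp": 1}, 2)  # enhancement cell admitted
admit_cell(q, {"clp": 0}, 2)  # base cell admitted
admit_cell(q, {"clp": 1}, 2)  # buffer full, low priority: dropped
admit_cell(q, {"clp": 0}, 2)  # buffer full, high priority: pushes out CLP=1
```

After this sequence only high-priority cells remain queued, matching the text's point that congestion losses fall first on the low-priority bit-stream.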
Prior art video encoding systems with resiliency to cell loss using the high- and low-priority transmission capabilities of ATM include adaptive encoders that dynamically modify encoding in response to information fed back to the encoder from the remote end. For example, one prior art system adjusts the partition between data encoded into high and low priorities in response to cell loss, while using a fixed encoding algorithm, to improve the efficiency of statistical multiplexing. This prior art system is not entirely satisfactory because it requires that all sources on the ATM network adapt using the same partitioning scheme, which complicates the call admission (i.e., connection) process: the network must ascertain that a source will implement the adaptation before making the admission.
Another prior art system provides resiliency to cell loss by decoding the received signal at the remote end to determine the number and addresses of the blocks contained in lost cells. This determination is then relayed to the encoder, which calculates the affected picture area in the locally decoded image so that encoding from the point of the errored blocks up to the currently encoded frame avoids the errored area. This system requires that the decoder completely decode and process the transmitted bit-stream before any feedback can be relayed to the encoder. While this system provides a measure of compression efficiency at low network loads, as the network load increases, the feedback delay inherent in such a system can defeat any advantage gained from adaptive encoding once the delay exceeds the real-time encoding requirements of the encoder.
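The feedback mechanism described above can be sketched as error tracking on the encoder side: when the decoder reports the addresses of blocks lost in a past frame, the encoder marks the affected area as unusable for prediction in every frame from the loss up to the current one. The class and method names are illustrative, and real systems would also track the spatial spread of the error under motion compensation:

```python
class FeedbackEncoder:
    # Sketch of decoder-feedback error tracking.  Blocks reported
    # lost in frame n are treated as errored in frames n..current,
    # since inter-frame prediction propagates the error forward.
    def __init__(self):
        self.frame_no = 0
        self.unusable = set()  # (frame, block) pairs barred from prediction

    def encode_next_frame(self):
        self.frame_no += 1

    def report_loss(self, lost_frame, lost_blocks):
        # Feedback from the remote decoder: block addresses lost in
        # lost_frame.  Propagate forward to the current frame.
        for f in range(lost_frame, self.frame_no + 1):
            for b in lost_blocks:
                self.unusable.add((f, b))

    def may_predict_from(self, frame, block):
        return (frame, block) not in self.unusable
```

Note that the usefulness of `report_loss` depends on the feedback arriving promptly: if many frames have been encoded before the report comes back, a large region must be barred from prediction, which is the delay penalty the text identifies.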