Data compression is defined as the reduction in the space required for a set of data. Various methods of data compression are known in the art. Each data compression method includes an encoding scheme to encode the set of data. The purpose of encoding schemes is to reduce the data storage or transmission requirements for the representation of a set of data. Associated with the encoding (compression) method is a decoding (decompression) method to reconstruct the data from its encoded representation.
Quantization is an integral part of many data compression techniques. One quantization technique is the quantization of a sample sequence using information from neighboring samples. This is commonly referred to as sequential quantization. Two types of sequential quantizers are utilized: predictive coders and multipath search coders. Predictive coders predict the next sample and then quantize the difference between the predicted value and the actual value. The prediction is based on a combination of previously reconstructed values. Two well-known predictive coding schemes are delta modulation and differential pulse code modulation (DPCM).
DPCM is a class of lossy encoding schemes for signals. In a lossy encoding scheme, the reconstructed image does not match the original image exactly. Basically, in DPCM systems, the difference between a given sample and its predicted value is quantized and transmitted. The predicted value, which is obtained from previous predicted values and quantized differences, is also available at the receiver, since an identical predictor is used there. The receiver adds this predicted value to the received quantized difference in order to produce an approximation to the original sample in each case.
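The DPCM loop described above can be sketched as follows. This is a minimal illustrative round trip, not the specific scheme of any cited system: the predictor is simply the previous reconstructed sample, and the quantizer levels are arbitrary example values.

```python
# Minimal DPCM sketch (illustrative assumptions: previous-sample
# predictor, hypothetical quantizer levels).

def quantize(error, levels):
    """Replace the error with the closest allowable value."""
    return min(levels, key=lambda a: abs(error - a))

def dpcm_encode(samples, levels=(-8, -2, 2, 8)):
    predicted = 0
    quantized = []
    for s in samples:
        q = quantize(s - predicted, levels)
        quantized.append(q)
        predicted = predicted + q   # track the decoder's reconstruction
    return quantized

def dpcm_decode(quantized):
    predicted = 0
    recon = []
    for q in quantized:
        predicted = predicted + q   # identical predictor at the receiver
        recon.append(predicted)
    return recon
```

Because the encoder predicts from its own reconstructed values rather than the original samples, encoder and decoder stay in lockstep, and the reconstruction error at each sample is bounded by the quantizer's granularity rather than accumulating.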
There are two types of DPCM coding schemes: fixed-rate and variable-rate. Fixed-rate schemes encode a given signal quantum (e.g., millisecond or pixel) with a fixed number of bits, such as eight bits. Fixed-rate schemes offer the advantages of simplicity, synchronous real-time encoding and decoding, and predictable buffer size or transmission time. Variable-rate schemes, on the other hand, use fewer bits in the less complex regions of the signal and more bits in the more complex regions. Variable-rate schemes thus offer better compression for a given fidelity or, equivalently, better fidelity for a given compression ratio.
The operation of a DPCM system begins by filtering the signal to remove short-term excursions. For example, a noise removal filter or a low-pass filter can be applied. It should be noted that this type of filtering can be done as a separate first step or a similar effect can be achieved by modifying the primary encoding process. The purpose of removing noise is not solely to enhance the image, although this is a desirable side effect, but more importantly to improve signal fidelity at a given coded bit rate by suppressing the bit-expensive coding of such excursions.
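A noise-removal pre-filter of the kind mentioned above can be as simple as a short moving average. The filter below is an illustrative sketch; the window width is an assumption, not a value taken from the text.

```python
# Simple noise-removal pre-filter sketch: a moving average suppresses
# short-term excursions before encoding (window width is illustrative).

def smooth(samples, width=3):
    half = width // 2
    out = []
    for i in range(len(samples)):
        # Window shrinks at the boundaries of the signal.
        window = samples[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out
```

A single-sample spike, which would otherwise cost many bits to code, is spread and attenuated by the filter, improving fidelity at a given coded bit rate.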
The encoding process for each input sample comprises prediction, error quantization and code generation. Prediction includes predicting the next sample value based on some previous predicted values or differences. The predicted sample value is subtracted from the actual value of the sample, yielding an "error" value. The error value is then quantized: it is replaced with a close value (e.g., the closest in magnitude) from a set of allowable values, denoted here as V={a1, a2, a3, ..., ak}. This set is usually small, often as small as two. The quantized value is used for subsequent prediction so that the decoder, which sees only quantized values, can maintain the same state. Often the set of allowable values varies depending on the state of the previous encodings. For example, in one scheme where V is of size 2 and a fixed 1-bit-per-point encoding is achieved, the values a1 and a2 are skewed in the direction of the immediately prior encodings. This provides a "momentum" effect and allows an accelerating signal to be encoded.
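The two-level "momentum" quantizer described above can be sketched as follows. The skew-update rule and step size here are illustrative assumptions; the text specifies only that a1 and a2 are skewed toward the direction of the prior encodings.

```python
# Sketch of a 1-bit-per-sample quantizer with "momentum": the allowable
# pair (a1, a2) drifts toward recent encodings so an accelerating signal
# can be tracked. Step size and skew rule are illustrative assumptions.

def encode_1bit(samples, step=1.0):
    predicted = 0.0
    skew = 0.0                               # momentum from prior choices
    bits, recon = [], []
    for s in samples:
        a1, a2 = skew + step, skew - step    # skewed allowable pair V
        err = s - predicted
        q = a1 if abs(err - a1) <= abs(err - a2) else a2
        bits.append(1 if q == a1 else 0)     # 1 bit per sample
        predicted += q
        recon.append(predicted)
        skew = 0.5 * skew + 0.5 * q          # drift toward recent choices
    return bits, recon
```

Since the skew is updated only from the chosen quantized values, a decoder receiving the bit stream can reproduce the identical state, as required.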
The quantized error value is encoded in some fashion to form the compressed signal. Bit encoding can be fixed or adaptive. In fixed bit encoding, a given quantized error value is always encoded with the same-sized bit string. In adaptive bit encoding, the encoding varies according to a statistical history, allowing variable-sized bit strings to represent the quantized error values and achieving near-optimal compression. Bit encoding can also be instantaneous or non-instantaneous. In instantaneous encoding, the appropriate output code bits in a given context can be determined immediately from the token (e.g., quantized error value) to be encoded. In non-instantaneous encoding, state information is retained in the bit encoder such that a given output bit may be determined by a plurality of encoded tokens. Non-instantaneous adaptive bit encoding offers improved compression. For simplicity, most such codes operate only on binary decisions; these binary non-instantaneous adaptive entropy codes are known in the art as arithmetic codes. The encoded quantized error values form the compressed signal data (e.g., image) which is stored or transmitted.
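A toy version of such a binary non-instantaneous adaptive code can be sketched as below. This uses floating-point interval splitting for clarity; practical arithmetic coders use integer arithmetic with renormalization, and the count-based adaptive model here is an illustrative assumption.

```python
# Toy adaptive binary arithmetic coder sketch. The model adapts by
# counting prior bits, so each output value depends on many encoded
# tokens (non-instantaneous). Floating point is for illustration only.

def ac_encode(bits):
    low, high = 0.0, 1.0
    c0, c1 = 1, 1                        # adaptive symbol counts
    for b in bits:
        p0 = c0 / (c0 + c1)              # current probability of a 0
        mid = low + (high - low) * p0
        if b == 0:
            high = mid; c0 += 1
        else:
            low = mid; c1 += 1
    return (low + high) / 2              # any point in the final interval

def ac_decode(x, n):
    low, high = 0.0, 1.0
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = c0 / (c0 + c1)
        mid = low + (high - low) * p0    # identical interval split
        if x < mid:
            out.append(0); high = mid; c0 += 1
        else:
            out.append(1); low = mid; c1 += 1
    return out
```

Because the decoder rebuilds the same counts from the bits it has already recovered, skewed inputs (mostly zeros or mostly ones) shrink the interval slowly and thus compress well.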
When the compressed signal is decoded (decompressed), the samples are reconstructed in the same order, and the same state information is maintained as in the encoder. Each sample is again predicted and the quantized error value decoded. As in the encoder, the predicted and error values are added to form the reconstructed sample value. All of the reconstructed sample values form the decoded signal.
In the prior art, some DPCM systems are adaptive. These adaptive designs consist of either adaptive predictors or adaptive quantization techniques. One adaptive quantization technique, referred to as error signal normalization, involves a memoryless quantizer which changes the quantizer intervals and levels according to the standard deviation of the signal, which is assumed to be known. Another technique, referred to as a switched quantizer, changes the quantizer characteristics to match the difference signal, using a few previous sample differences (two in the case of video) to determine the state of the difference signal.
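Error signal normalization can be sketched as applying a fixed "unit" quantizer to the error divided by the known standard deviation, then rescaling. The unit levels below are illustrative assumptions.

```python
# Sketch of error-signal normalization: a fixed unit quantizer is scaled
# by the signal's known standard deviation. Unit levels are illustrative.

UNIT_LEVELS = (-1.5, -0.5, 0.5, 1.5)

def normalized_quantize(error, sigma):
    scaled = error / sigma                          # normalize the error
    q = min(UNIT_LEVELS, key=lambda a: abs(scaled - a))
    return q * sigma                                # rescale the level
```

The quantizer itself is memoryless; all adaptation is carried by the scale factor sigma.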
Another technique, specifically applicable to images and referred to as spatial masking, uses both the previous samples on the scan line and samples from the previous scan line. A masking function is developed from the weighted sum of the gradients around a given picture element.
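Such a masking function might be sketched as a weighted sum of absolute gradients around the current picture element, drawing on the previous sample of the current scan line and samples from the previous line. The particular gradients and weights below are illustrative assumptions.

```python
# Sketch of a spatial masking function: weighted sum of absolute
# gradients around pixel (i, j), using the previous sample on the scan
# line and samples from the previous scan line. Weights are illustrative.

def masking(img, i, j, w=(0.5, 0.35, 0.15)):
    gh = abs(img[i][j] - img[i][j - 1])       # previous sample, same line
    gv = abs(img[i][j] - img[i - 1][j])       # sample from previous line
    gd = abs(img[i][j] - img[i - 1][j - 1])   # diagonal neighbor
    return w[0] * gh + w[1] * gv + w[2] * gd
```

A large masking value indicates high local activity, where quantization error is visually masked and coarser quantization can be tolerated.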
Multipath search coders (MSCs) use both previous and future sample values to select a quantized version of a given input sample. One type of MSC technique is tree coding. Tree coding makes use of a tree structure in which each typical sample sequence is stored as a sequence of branches. When a sequence is selected, its corresponding tree path is transmitted as a binary sequence, with each bit indicating the direction to follow at each sequential node of the tree. As the coder proceeds through the samples, subsequent samples are usually less correlated with the first sample and thus have more possible values; the set of candidate sequences therefore branches out, which is why this MSC technique resembles a tree structure.
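A toy tree-coding search can be sketched as follows: every depth-n path through a binary tree of step values is a candidate reconstruction, and the path with minimum total distortion is selected. Exhaustive search is used here for clarity; practical MSCs prune the tree (e.g., with an M-algorithm-style search), and the step values are illustrative assumptions.

```python
# Toy binary tree-coding sketch: search all depth-n paths and transmit
# the branch bits of the path with minimum squared error. Exhaustive
# search is for illustration; real coders prune. Steps are illustrative.

from itertools import product

def tree_code(samples, steps=(-2, 2)):
    best_bits, best_err = None, float("inf")
    for path in product((0, 1), repeat=len(samples)):
        value, err = 0, 0.0
        for bit, s in zip(path, samples):
            value += steps[bit]          # follow one branch of the tree
            err += (value - s) ** 2      # accumulate path distortion
        if err < best_err:
            best_bits, best_err = list(path), err
    return best_bits
```

Unlike a predictive coder, this search looks ahead at future samples before committing to a quantized value for the current one.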
As will be shown, the present invention provides a fixed-rate scheme for encoding data which allows a variable number of bits in regions of varying complexity of the signal. Thus, the present invention encodes a set of data samples using fewer bits for more probable samples and more bits for less probable samples, while providing nearly constant bit rate compression.