Lossless compression is a form of compression in which an image may be reconstructed without any loss of information. Lossless image compression is required in medical imaging, satellite and aerial imaging, image archiving, preservation of precious artwork and documents, and any other application demanding ultra-high image fidelity. Furthermore, lossless image coding is the last step of many lossy image compression systems, such as the lossless compression of transform coefficients in Discrete Cosine Transform (DCT) coding.
Generally, image compression algorithms implement one or more of the following functions: prediction, decomposition and encoding. A conventional method for implementing image compression is shown in FIG. 10. As shown, method 200 includes receiving image data (step 202), predicting image data (step 204), and subtracting the predicted image data from the received image data to form a residual image (step 205). The residual image is decomposed or transformed (step 206). The decomposed image is then encoded to form a compressed image (steps 208 and 210). The compressed image may then be transmitted to a receiver. The receiver may reverse the process to obtain the original image data.
Prediction of an image is useful in compression because the range of the difference between the predicted and actual values of a pixel is usually significantly smaller than the range of the pixel values themselves. Only the residual difference needs to be encoded once the predictor is known. There are a variety of prediction methods that may be used. A simple method uses the previous pixel value to predict the current pixel value. Other methods may include using a weighted sum of the pixels within a local area of an image. These are examples of spatial prediction.
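By way of illustration only (this is not the method of the invention), the simple previous-pixel predictor described above can be sketched as follows; the residual values cluster near zero and therefore need fewer bits to code:

```python
# Illustrative sketch: previous-pixel prediction along a scan line.

def predict_previous(row):
    """Predict each pixel from its left neighbor; the first pixel predicts 0."""
    residual = []
    prev = 0
    for pixel in row:
        residual.append(pixel - prev)
        prev = pixel
    return residual

def reconstruct(residual):
    """Invert the prediction exactly -- the scheme is lossless."""
    row = []
    prev = 0
    for r in residual:
        prev = prev + r
        row.append(prev)
    return row

row = [100, 102, 101, 105, 104]
res = predict_previous(row)   # [100, 2, -1, 4, -1]
assert reconstruct(res) == row
```

Note that the residual entries after the first are small in magnitude, which is precisely what makes the subsequent encoding step effective.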
A predictor may also include the intensity in another spectral band or the intensity at a previous time. Spectral predictors exploit the fact that while spectral bands contain different information, they are often highly correlated and, therefore, contain redundant information. By confining the redundant information to a single band (the predictor), the dynamic ranges of the remaining bands may be significantly reduced.
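A minimal sketch of spectral prediction, with hypothetical band values chosen for illustration (the band names and data are assumptions, not from the source):

```python
# Illustrative sketch of spectral prediction: use one band's intensities to
# predict a correlated band, leaving only a small-range residual.

def band_residual(predictor_band, target_band):
    """Subtract the predictor band pixel-by-pixel from the target band."""
    return [t - p for t, p in zip(target_band, predictor_band)]

red = [120, 130, 140, 150]   # hypothetical predictor band
nir = [125, 133, 146, 152]   # hypothetical correlated band

residual = band_residual(red, nir)   # [5, 3, 6, 2]
# The dynamic range drops from ~150 to ~6, so the residual codes in fewer bits,
# and adding the predictor band back recovers the target band exactly.
assert [r + p for r, p in zip(residual, red)] == nir
```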
Decomposition uses one of a variety of transformations to compactly represent image data. Examples of these are discrete cosine transforms, wavelet spatial transforms, and the Karhunen-Loeve spectral transform. These achieve compression by describing the data as a weighted sum of known basis functions. Since the basis functions are known, only the weights need to be stored. With the right basis functions, the number of weights stored may be much less than the number of pixels in the original image. These transformations may be applied to the residual or directly to the image.
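As a small worked example of describing data as a weighted sum of known basis functions, a one-dimensional DCT and its inverse can be sketched as below (a simplified illustration, not any particular standard's transform):

```python
import math

# Illustrative sketch: express a signal as weights on known cosine basis
# functions (a 1-D DCT-II). Only the weights need be stored; the basis is known.

def dct_weights(signal):
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) * (2 / n)
            for k in range(n)]

def inverse_dct(weights):
    n = len(weights)
    return [weights[0] / 2 + sum(weights[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                                 for k in range(1, n))
            for i in range(n)]

signal = [52.0, 55.0, 61.0, 66.0]
w = dct_weights(signal)
# The round trip is exact to floating-point precision.
assert all(abs(a - b) < 1e-9 for a, b in zip(inverse_dct(w), signal))
```

Compression arises when many of the weights are near zero and can be coarsely quantized or dropped; in a lossless system the weights are kept exactly.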
Encoding typically involves looking for repeating patterns of data values, generating a code book containing these patterns, and expressing the data as a list of references to these patterns. Lempel-Ziv (used in gzip and compress) and run length encoding (RLE) are examples.
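Run length encoding, mentioned above, is the simplest of these and can be sketched in a few lines (an illustration, not a production coder):

```python
# Illustrative sketch of run-length encoding (RLE): runs of repeated values
# are replaced by (value, count) pairs.

def rle_encode(data):
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return [(v, c) for v, c in encoded]

def rle_decode(pairs):
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

data = [7, 7, 7, 7, 3, 3, 9]
pairs = rle_encode(data)      # [(7, 4), (3, 2), (9, 1)]
assert rle_decode(pairs) == data
```

RLE pays off only when the data contains long runs, which is why it is usually applied after prediction or decomposition has concentrated the data into runs of small values.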
There are variations and combinations of prediction, decomposition and encoding that have been implemented. One such combination is the JPEG 2000 standard. The JPEG 2000 standard skips the prediction step, decomposes the image with a discrete wavelet transform (DWT), separates the DWT coefficients into bit planes, and encodes the resulting bit stream.
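As a greatly simplified stand-in for that pipeline (JPEG 2000 actually uses CDF wavelets and EBCOT entropy coding, neither of which is shown here), a one-level integer Haar-style transform followed by a bit-plane split can be sketched as:

```python
# Simplified stand-in for a DWT + bit-plane pipeline: a one-level integer
# Haar transform (S-transform), then extraction of coefficient bit planes.

def haar_step(signal):
    """Integer Haar (S-transform) averages and differences; invertible, hence lossless."""
    avg = [(signal[i] + signal[i + 1]) // 2 for i in range(0, len(signal), 2)]
    diff = [signal[i] - signal[i + 1] for i in range(0, len(signal), 2)]
    return avg, diff

def bit_plane(values, plane):
    """Extract one bit plane from non-negative coefficients."""
    return [(v >> plane) & 1 for v in values]

avg, diff = haar_step([10, 12, 14, 14])   # avg=[11, 14], diff=[-2, 0]
assert bit_plane(avg, 0) == [1, 0]        # least-significant bit plane
assert bit_plane(avg, 3) == [1, 1]        # 11 = 0b1011, 14 = 0b1110
```

Coding the most significant bit planes first is what makes progressive reconstruction possible: truncating the stream still yields an approximation of every coefficient.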
Other algorithms that are either lossless or have lossless modes of operation include gzip, compress, USES, FELICS, JPEG-LS, LuraWave, ERIC, and CREW.
One may expect a conventional lossless compression ratio to range between 1.3:1 and 2.8:1 depending on data content. In a survey of literature published in the last five years, no publications have been found that showed significant improvement in compression ratios. This may suggest that there is a “wall” that lossless compression algorithms have been unable to surmount. The top end of the range of performance appears to be a 2.8:1 compression ratio. Many of the algorithms fall within a narrow range that lies between 2:1 and 2.8:1 compression ratios.
Examining improvements in compression rates (measured in bits per pixel, bpp), various algorithms have been tested on various image sets. The results for grayscale images are remarkably consistent. The newer algorithms typically improve the compression rate by anywhere from 0.01 to 0.1 bpp. However, the rms rate variation between images is typically 0.7-0.8 bpp. Thus, the differences between algorithms are generally insignificant when compared to the image-to-image differences in performance of a single algorithm.
Table 1 below lists the compression rates in bits per pixel of various conventional algorithms. Using a standard Lena image, the JPEG 2000 standard compression algorithm in the lossless mode does improve the compression rate versus the USES standard by 0.32 bits per pixel. This improvement comes at the cost, however, of a much more complicated algorithm that requires much more computing power.
TABLE 1
Compression Rates (bits per pixel) for lossless compression of the Lena image.

  Algorithm    JPEG 2000  FELICS  PNG   SPh   USES  JPEG-LS  CALICh  SPa   CALICa
  Rate (bpp)   4.96       4.91    4.74  4.63  4.61  4.58     4.53    4.48  4.29
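For 8-bit imagery, a rate in bits per pixel converts directly to a compression ratio (ratio = 8 / bpp), which ties the rates above to the 1.3:1 to 2.8:1 ratio range discussed earlier. A small arithmetic check:

```python
# Compression ratio for imagery originally stored at 8 bits per pixel.

def compression_ratio(bpp, original_bpp=8):
    """Ratio of original to compressed size, e.g. 4 bpp -> 2:1."""
    return original_bpp / bpp

# Rates in the neighborhood of Table 1 translate to modest ratios:
assert round(compression_ratio(4.29), 2) == 1.86   # best rate in Table 1
assert round(compression_ratio(4.96), 2) == 1.61   # worst rate in Table 1
```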
In addition, the surveyed algorithms do not appear to deal with coherent noise found in uncalibrated image data. Coherent noise manifests itself as stripes in the image and is a consequence of images being generated by detector arrays, in which each element has a different responsivity. Calibration removes most of this striping, but in a remote sensing application where data is transmitted from the sensor in a satellite to the ground, calibration is generally applied on the ground. Striping causes discontinuities in the data which, in turn, reduce compression rates. Compression prior to transmission (to maximize the use of the available bandwidth) needs to deal with these discontinuities in order to maximize compression ratios.
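The striping effect and one simple, reversible way a compressor could model it can be sketched as below; the tiny image and the column-mean model are assumptions made for illustration, not the invention's method:

```python
# Illustrative sketch: coherent "striping" from per-column detector
# responsivity differences, and a reversible column-offset model that a
# compressor could subtract before coding.

def column_means(image):
    rows, cols = len(image), len(image[0])
    return [sum(image[r][c] for r in range(rows)) / rows for c in range(cols)]

def remove_stripes(image):
    """Subtract each column's mean; store the means so the step is reversible."""
    means = column_means(image)
    flat = [[pixel - means[c] for c, pixel in enumerate(row)] for row in image]
    return flat, means

# hypothetical 3x3 image with a bright middle column (a stripe)
image = [[10, 50, 10],
         [12, 52, 12],
         [14, 54, 14]]
flat, means = remove_stripes(image)
assert means == [12.0, 52.0, 12.0]
assert flat[0] == [-2.0, -2.0, -2.0]   # column-to-column discontinuity is gone
```

With the stripe offsets removed, the residual image is smooth and predicts well; storing the short list of offsets alongside the residual keeps the scheme lossless.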
Finally, with the exception of JPEG 2000, the algorithms shown in Table 1 do not allow progressive reconstruction as a feature. This feature first reconstructs the most important data, allowing transmission of the data to be interrupted with minimal information loss. This is an important feature for transmitting data over the Internet, for example. As suggested previously, the JPEG 2000 standard achieves this feature at the expense of high complexity and high computational load.
The present invention provides improvements in compression rate, deals with discontinuities in the image data and permits progressive reconstruction of the image without information loss.