Digital imaging technology is continually improving. Even inexpensive digital cameras produce images with high resolution and color granularity. However, as resolution and color granularity increase, the resulting image files grow geometrically in size. Consequently, image compression has become increasingly important, reducing both the storage required to hold image data and the bandwidth required to transmit it.
Generally, there are two forms of image compression: lossy compression and lossless compression. Lossy compression discards some of the image data, sacrificing some image quality for the sake of a smaller file. By contrast, lossless compression fully preserves the visual content of the original image, reducing the file size only by eliminating bits of data that are not needed to reproduce the original image exactly.
Lossless compression is preferred when image degradation is not acceptable. For example, lossless compression is selected when the image data is to be decompressed, edited, and recompressed, when the image data was acquired at great cost, or when the highest image quality is imperative. Lossless image compression is typically used for medical imaging, mass media preproduction, reproduction of fine art, professional digital photography, and similar applications.
Conventional lossless compression techniques generally employ two phases. The first phase is a modeling phase, in which the image data is analyzed to develop a probabilistic model that determines how frequently particular values appear in the image data. For example, in a grayscale image of a landscape, there may be numerous elements with grayscale values that represent common shades of the grass and common shades of the sky, while there may be comparatively few elements with grayscale values that represent the shades of tree trunks and other less common features. The frequencies with which the different grayscale values appear are analyzed and ranked.
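The modeling phase described above can be sketched in a few lines; this is an illustrative example only, and the function name and toy pixel values (sky, grass, trunk shades) are hypothetical, not taken from any particular implementation:

```python
from collections import Counter

def model_frequencies(pixels):
    """Modeling phase: count how often each grayscale value appears
    and rank the values from most to least frequent."""
    counts = Counter(pixels)
    total = len(pixels)
    # Probability model: (value, relative frequency), ranked descending.
    return sorted(((v, c / total) for v, c in counts.items()),
                  key=lambda item: item[1], reverse=True)

# Toy "landscape": many sky (200) and grass (80) pixels, one trunk (30) pixel.
pixels = [200] * 6 + [80] * 3 + [30]
print(model_frequencies(pixels))  # [(200, 0.6), (80, 0.3), (30, 0.1)]
```

The ranked probabilities produced here are exactly the input the coding phase needs: the most probable values sit at the front of the list and are the candidates for the shortest codes.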
The second phase is a coding phase that reduces the size of the image data based on the probabilities determined in the modeling phase. Entropy coding is commonly used to encode and compress the image. Drawing on the frequency analysis, entropy coding replaces the standard binary representations of the more frequently occurring values with shortened series of bits, while assigning longer series of bits to values that occur less frequently. Because the most common values are recoded in the fewest bits, the resulting compressed image file is smaller. Thus, the content of the file is recoded in fewer bits, but without discarding any of the image data.
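As a concrete sketch of the coding phase, Huffman coding is one well-known entropy-coding algorithm that assigns shorter bit strings to more frequent values; the dictionary-merging construction below is a simplified illustration, not the method of any particular product:

```python
import heapq
from collections import Counter

def huffman_codes(pixels):
    """Coding phase (Huffman variant): frequent values receive short
    bit strings, rare values receive longer ones."""
    counts = Counter(pixels)
    # Heap entries: (frequency, tiebreaker, {value: code-so-far}).
    heap = [(freq, i, {val: ""}) for i, (val, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing "0" onto one
        # subtree's codes and "1" onto the other's.
        merged = {v: "0" + c for v, c in left.items()}
        merged.update({v: "1" + c for v, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Same toy image as before: 6 sky, 3 grass, 1 trunk pixel.
pixels = [200] * 6 + [80] * 3 + [30]
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
# The frequent sky value gets a 1-bit code, so the 10 pixels encode in
# 14 bits instead of the 80 bits (10 x 8) of the raw representation,
# and the code is prefix-free, so the original can be decoded exactly.
```

Note that no pixel information is lost: because each code is a prefix of no other, the bit stream decodes unambiguously back to the original values, which is what distinguishes this from lossy truncation.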
A two-phased modeling/coding approach can significantly reduce the size of the file and thereby achieve high coding efficiency. On the other hand, the modeling phase and the coding phase both consume extensive processing resources. The statistical analysis of the original image performed in the modeling phase is, by itself, highly computationally intensive. Furthermore, depending on the entropy coding algorithm used, the coding phase also may involve extensive processing. Thus, while conventional lossless compression techniques may achieve high coding efficiency, that efficiency is gained at the cost of processing time and resources.