Lossless compression of digital images is an integral part of a wide variety of applications, including medical imaging, remote sensing, printing, and computing. Recent advances in digital electronics and electromechanics have also encouraged the widespread use of digital images. Image compression (or coding) algorithms have grown sophisticated, spurred by applications and by standardization activities such as JPEG (“Digital Compression and Coding of Continuous Tone Images”, ISO Document No. 10918-1). The lossy version of JPEG, introduced around 1990, gained an enormous following in industry owing to its simplicity, the availability of public-domain software, the efforts of the Independent JPEG Group (IJG), and inexpensive custom hardware (C-Cube Microsystems). The lossless counterpart did not gain comparable acceptance, but it provided momentum for diversified research activities.
The primary approaches to lossless compression have used differential pulse code modulation (DPCM) followed by entropy coding of the residuals (W. Pennebaker and J. Mitchell, JPEG Still Image Compression Standard, Van Nostrand Reinhold, New York, 1993). More recently, schemes that employ transforms or wavelets have also been investigated and have gained acceptance (A. Zandi et al., “CREW: Compression with reversible embedded wavelets”, Proc. of Data Compression Conference, March 1995, pp. 212-221; F. Sheng et al., “Lossy and lossless image compression using reversible integer wavelet transforms”, Proc. I.E.E.E., 1998). However, the majority of the promising techniques have employed sophisticated DPCM and entropy coding techniques. These methods rely heavily on statistical modeling of the data source (M. Weinberger et al., “On universal context modeling for lossless compression of gray scale images”, I.E.E.E. Trans. on Image Processing, 1996). Although such approaches have delivered excellent compression performance, they are cumbersome to implement and often inefficient as software-programmable solutions on digital signal processors (DSPs) or general-purpose microprocessors. Efforts have been made to reduce the complexity of the statistical-modeling portion of some of the best-performing coders, CALIC (X. Wu et al., “Context-based, adaptive, lossless image coding”, I.E.E.E. Trans. on Communications, vol. 45, 1997, pp. 437-444) and LOCO-I (M. Weinberger et al., “LOCO-I: A low complexity, context-based lossless image compression algorithm”, Proc. of 1996 Data Compression Conference, 1996, pp. 140-149). Even with such efforts, the computational complexity remains daunting. One primary reason is the context switch that occurs at every pixel boundary, which introduces several data-dependent compute and control complexities in both the encoder and the decoder.
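The DPCM idea described above can be sketched briefly: each pixel is predicted from its causal neighbors, and only the prediction residual is passed to the entropy coder. The sketch below is a minimal, illustrative example, not the method of any particular coder cited above; it uses the median edge detector predictor (the kind of low-complexity predictor employed by LOCO-I/JPEG-LS) and omits the entropy-coding stage entirely. The function names and the zero-padding of out-of-image neighbors are assumptions made for this illustration.

```python
# Minimal sketch of DPCM-style lossless prediction: compute residuals
# against a causal predictor, then invert the process exactly.
# Entropy coding of the residuals is omitted for brevity.

def med_predict(a, b, c):
    """Median edge detector: a = left, b = above, c = above-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def _neighbors(plane, y, x):
    """Causal neighbors, with out-of-image samples taken as 0 (an assumption)."""
    a = plane[y][x - 1] if x > 0 else 0
    b = plane[y - 1][x] if y > 0 else 0
    c = plane[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a, b, c

def dpcm_residuals(image):
    """Return the residual plane for a 2-D list of pixel rows."""
    h, w = len(image), len(image[0])
    residuals = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b, c = _neighbors(image, y, x)
            residuals[y][x] = image[y][x] - med_predict(a, b, c)
    return residuals

def dpcm_reconstruct(residuals):
    """Invert dpcm_residuals exactly -- the scheme is lossless."""
    h, w = len(residuals), len(residuals[0])
    image = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b, c = _neighbors(image, y, x)
            image[y][x] = residuals[y][x] + med_predict(a, b, c)
    return image
```

Because the decoder recomputes the same prediction from already-reconstructed pixels, reconstruction is bit-exact; the per-pixel context switching criticized above would enter at the point where each residual's coding context is selected, a step this sketch deliberately leaves out.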
What is needed is an image compression approach that reduces computational complexity while retaining many of the attractive features of the most flexible compression approaches. Preferably, the approach should allow selective use of lossless and lossy compression for different portions of the same image, without substantially increasing the complexity relative to applying only lossless or only lossy compression to the entire image.