At lower bitrates, compressed images often show artifacts due to the quantization of transform coefficients. The nature of these artifacts depends largely on the chosen transform and quantization scheme. The artifacts are often called quantization noise. It is well known that wavelet-based noise removal is the state-of-the-art technique for removal of Gaussian white noise, owing to the fundamentally different theoretical properties of wavelet bases compared with Fourier bases. The characteristics of quantization noise are very different from those of Gaussian white noise, and the familiar wavelet-shrinkage technique is not directly applicable.
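By way of illustration, the classical wavelet-shrinkage technique referred to above can be sketched in one dimension as follows (a minimal one-level Haar example; the function names and the fixed soft threshold are illustrative assumptions, not taken from any cited reference):

```python
def haar_1level(x):
    """One level of the orthonormal Haar wavelet transform of an even-length list."""
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inv_haar_1level(approx, detail):
    """Invert one Haar level, interleaving the reconstructed sample pairs."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in coeffs]

def denoise(x, t):
    """Wavelet shrinkage: threshold the detail band, keep the approximation."""
    a, d = haar_1level(x)
    return inv_haar_1level(a, soft_threshold(d, t))
```

The key point for the discussion above is that this scheme is tuned to Gaussian white noise, whose energy spreads thinly over many small detail coefficients; quantization noise does not have that character.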
The new image compression standard JPEG2000 (“J2K”) contains very sophisticated quantization schemes that are known to the encoder and decoder. See ITU-T Recommendation T.800 | ISO/IEC 15444-1:2000, JPEG 2000 Image Coding System.
Removal of compression artifacts caused by DCT compression is known in the art. Those artifacts are mostly blocking artifacts due to the 8×8 transform blocks. Typically, the position of the boundary between two transform blocks is known, and the processing is concentrated around those boundaries. For more information, see Shen, M.-Y., Kuo, C.-C., “Review of Postprocessing Techniques for Compression Artifact Removal,” Journal of Visual Communication and Image Representation, vol. 9, pp. 2-14, 1998, and Xiong, Z., Orchard, M., Zhang, Y., “A Deblocking Algorithm for JPEG Compressed Images Using Overcomplete Wavelet Representations,” IEEE Trans. Circuits and Systems for Video Technology, vol. 7, pp. 433-437, 1997.
There has been some work on artifact removal at tile boundaries with wavelet compression systems, which is a problem similar to artifact removal at DCT block boundaries. The original JPEG Standard uses the DCT, as opposed to J2K, which uses a wavelet transform. For more information on the JPEG Standard, see ITU-T Recommendation T.81 | ISO/IEC 10918-1:1994, Information Technology—Digital Compression and Coding of Continuous-Tone Still Images: Requirements and Guidelines.
Some postprocessing methods for removal of quantization noise in wavelet compression systems exist. In Nguyen, T., Yang, S., Hu, Y. H., Tull, D. L., “JPEG-2000 post processing,” presented at a J2K meeting, 1999, a MAP-estimation algorithm is applied to the image, which requires an estimate of the original image. In general applications, that estimate is not available. Moreover, it is not possible to embed this technique into the decoder. Another approach is used in Wei, D., Burrus, C. S., “Optimal Wavelet Thresholding for Various Coding Schemes,” in Proceedings of ICIP '95, vol. 1, pp. 610-613, 1995, where the authors apply a simple wavelet-denoising algorithm to the quantized data. That denoising algorithm is specifically suited for Gaussian white noise. Quantization noise is by no means Gaussian white noise; in contrast, wavelet coefficients after quantization can take only a limited number of values, determined by the quantization. Therefore, the thresholding scheme for Gaussian noise that the authors set forth is not an optimal technique for removal of quantization noise. In Nosratinia, A., “Embedded Post-Processing for Enhancement of Compressed Images,” in Proceedings of Data Compression Conference DCC, pp. 62-71, 1999, an algorithm for artifact removal in DCT JPEG/wavelet-compressed images is presented that computes different shifts of the fully decoded image and clips coefficients to the quantization intervals. However, this is also a postprocessing step that first requires decoding of the entire image.
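To illustrate why quantized coefficients take only a limited number of values, a scalar deadzone quantizer of the kind used in wavelet coders can be sketched as follows (the function names and the reconstruction offset r are illustrative assumptions, not the exact J2K formulas):

```python
def quantize(coeff, step):
    """Deadzone scalar quantizer: map a coefficient to a signed integer index."""
    sign = 1 if coeff >= 0 else -1
    return sign * int(abs(coeff) / step)

def dequantize(index, step, r=0.5):
    """Reconstruct at a point (offset r) inside the decoded quantization interval."""
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return sign * (abs(index) + r) * step
```

Every dequantized coefficient lands on a discrete grid determined by `step` and `r`, and the error `coeff - dequantize(quantize(coeff, step), step)` is bounded by the step size: quantization noise is structured and bounded, not Gaussian white noise.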
Correcting blurring is another problem for which image processing is performed. Sensing devices for digitizing images, such as a scanner or a CCD camera, typically produce a blurred version of the original image. Therefore, deblurring algorithms are necessary to produce a digital image that has the same degree of sharpness as the original image. Deblurring of images is a classical part of image processing. Typically, the blurring process is modeled by a convolution with a smoothing kernel. An inversion of this blurring is done by dividing by the Fourier transform of the convolution kernel in the Fourier domain. An exact inverse is only possible if the convolution kernel does not have any zeros in its frequency response. Even if the kernel satisfies this criterion, in the presence of noise the deblurring problem becomes ill-posed, since noise pixels may be magnified during the filter inversion. If the convolution kernel is not invertible, a regularized inverse is typically used, where a regularization parameter manages the trade-off between full inversion and noise suppression.
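The regularized inversion just described can be sketched in one dimension as follows (a naive DFT over a short circular signal; the parameter `lam` plays the role of the regularization parameter, and all names are illustrative assumptions):

```python
import cmath
from math import pi

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_convolve(x, h):
    """Circular convolution of two equal-length lists (the blur model)."""
    n = len(x)
    return [sum(x[(t - s) % n] * h[s] for s in range(n)) for t in range(n)]

def deblur(y, h, lam):
    """Regularized inverse filter: divide by the kernel's frequency response,
    with lam trading off full inversion (lam -> 0) against noise suppression."""
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + lam) for Yk, Hk in zip(Y, H)]
    return [v.real for v in idft(X)]
```

With `lam = 0` and a kernel whose frequency response has no zeros, this reduces to exact division; a larger `lam` damps the frequencies where the response is small, which is exactly where noise would otherwise be magnified.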
Recently, hybrid Fourier-wavelet-based techniques have been proposed in the literature to solve the deconvolution problem. In those approaches, the denoising part of the deconvolution problem is performed by wavelet shrinkage, and the inversion of the convolution is performed in the Fourier domain by classical filter inversion. For more information, see Abramovich, F., Silverman, B. W., “Wavelet Decomposition Approaches to Statistical Inverse Problems,” Biometrika, vol. 85, pp. 115-129, 1998; Donoho, D., “Nonlinear Solution of Linear Inverse Problems by Wavelet-Vaguelette Decomposition,” Journal of Applied and Computational Harmonic Analysis, vol. 2, pp. 101-115, 1995; Neelamani, R., Choi, H., Baraniuk, R., “Wavelet-based Deconvolution for Ill-conditioned Systems,” in Proceedings of ICASSP, vol. 6, pp. 3241-3244, 1998; and Mallat, S., “A Wavelet Tour of Signal Processing,” Academic Press, 1998.
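The two-stage structure of such hybrid schemes can be sketched end-to-end in one dimension as follows (a toy combining a regularized Fourier inversion with one level of Haar shrinkage; `lam`, `thresh`, and all function names are illustrative assumptions, not taken from the cited papers):

```python
import cmath
from math import pi

def dft(x, sign=-1):
    """Naive DFT (sign=-1) or inverse DFT (sign=+1) of a short list."""
    n = len(x)
    out = [sum(x[t] * cmath.exp(sign * 2j * pi * k * t / n) for t in range(n))
           for k in range(n)]
    return out if sign == -1 else [v / n for v in out]

def soft(v, t):
    """Soft-threshold a single coefficient toward zero by t."""
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def hybrid_deconvolve(y, h, lam, thresh):
    # Stage 1: regularized inversion of the circular blur in the Fourier domain.
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + lam) for Yk, Hk in zip(Y, H)]
    x = [v.real for v in dft(X, sign=+1)]
    # Stage 2: wavelet shrinkage of the amplified noise (one Haar level).
    s = 2 ** -0.5
    approx = [s * (p + q) for p, q in zip(x[0::2], x[1::2])]
    detail = [soft(s * (p - q), thresh) for p, q in zip(x[0::2], x[1::2])]
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out
```

The design point these references share is the split itself: the Fourier stage handles the convolution, where it is diagonal, while the wavelet stage handles the residual noise, where shrinkage is effective.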
Enhancement of images in a subband decomposition, especially using the Laplacian pyramid, is known in the art. For example, see Ito, W., “Method and Apparatus for Enhancing Contrast in Images by Emphasis Processing of a Multiresolution Frequency Band,” Fuji, Japan, U.S. Pat. No. 5,907,642, issued May 24, 1999 and U.S. Pat. No. 5,960,123, issued Sep. 28, 1999.
U.S. Pat. No. 5,703,965, entitled “Image Compression/Decompression Based on Mathematical Transform, Reduction/Expansion, and Image Sharpening,” issued to Chi-Yung, F., Loren, P. on Dec. 30, 1995, discusses two operations: compression, and image sharpening and smoothing. In that approach, which assumes the original JPEG compression scheme, the two operations are performed one after the other rather than combined into one.