1. Field of the Invention
The present invention relates to techniques for removing noise from a digital representation (e.g., a digital image) using a set of overcomplete transforms and thresholding. The techniques may be employed in methods/algorithms which may be embodied in software, hardware, or a combination thereof, and may be implemented on a computer or other processor-controlled device.
2. Description of the Related Art
The problem of recovering a signal from additive independent and identically distributed (i.i.d.) noise continues to receive significant attention, as it provides a benchmark for the accurate statistical modeling and representation of signals. The problem was first approached using a single transform and was later extended to overcomplete bases. Since then, research in this area has concentrated on obtaining better transforms and better thresholding techniques.
De-noising using linear transforms and thresholding relies on sparse decompositions with respect to the utilized transforms. Under i.i.d. noise assumptions, it can be shown that transform coefficients with small magnitudes have very low signal-to-noise ratio (SNR), and that a thresholding nonlinearity which effectively detects and removes (in the case of hard-thresholding) or reduces (in the case of soft-thresholding) these coefficients improves the noise performance. Of course, this improvement is confined to the class of signals over which the utilized linear transforms provide sparse decompositions. However, if one considers typically utilized localized transforms, such as wavelets or block discrete cosine transforms (DCTs), applied over a particular image, it is apparent that many of the DCT or wavelet basis functions comprising the transform will overlap edges and other singularities. It is well known that sparsity properties do not hold over such features, and de-noising performance suffers as a result. De-noising with overcomplete transforms tries to remedy this problem by averaging several de-noised estimates (corresponding to shifted versions of the same transform) at every pixel. It is hoped that some of the estimates will provide better performance than others, and that this benefit will be reflected in the average.
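As a minimal illustrative sketch of these ideas (not the claimed invention itself), the following assumes a one-level orthonormal Haar transform as the base transform, applies hard- or soft-thresholding to its coefficients, and then averages the de-noised estimates obtained from all circular shifts of the input, which is one simple way to realize the overcomplete (shift-averaged) scheme described above. All function names and the choice of transform are assumptions made for illustration only:

```python
import math

def haar_forward(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    h = len(x) // 2
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(h)]  # averages
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(h)]  # details
    return a + d

def haar_inverse(c):
    """Inverse of haar_forward."""
    h = len(c) // 2
    x = []
    for i in range(h):
        x.append((c[i] + c[h + i]) / math.sqrt(2))
        x.append((c[i] - c[h + i]) / math.sqrt(2))
    return x

def hard_threshold(c, t):
    # Remove low-SNR coefficients: zero anything below magnitude t.
    return [v if abs(v) >= t else 0.0 for v in c]

def soft_threshold(c, t):
    # Reduce coefficients: shrink every magnitude toward zero by t.
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in c]

def denoise_once(x, t, threshold=hard_threshold):
    """Single-transform de-noising: transform, threshold, invert."""
    return haar_inverse(threshold(haar_forward(x), t))

def denoise_overcomplete(x, t, threshold=hard_threshold):
    """Average the de-noised estimates over all circular shifts of x,
    so that basis functions land differently on edges in each shift."""
    n = len(x)
    acc = [0.0] * n
    for s in range(n):
        shifted = x[s:] + x[:s]
        est = denoise_once(shifted, t, threshold)
        for i in range(n):
            acc[(i + s) % n] += est[i]  # un-shift before accumulating
    return [v / n for v in acc]
```

A pixel lying next to an edge is poorly represented in some shifts (the Haar basis function straddles the edge) and well represented in others; averaging the per-shift estimates lets the better-behaved shifts moderate the worse ones, which is the rationale given above for overcomplete de-noising.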