Terms used in the present description are defined as follows.
“Search shape”: an aggregate of search points around a target pixel of template matching, or the shape formed by the aggregate.
“Template shape”: a group of pixels used for calculating the degree of similarity between the target pixel and each search point when template matching is performed, or the shape formed by that group of pixels. The same shape is used for the group of pixels around the target pixel and for the group of pixels around each search point, and the values of pixels at positions having the same relative positional relationship are compared with each other.
In the field of image processing, various denoising filters have been proposed as techniques for reducing noise introduced when an image is captured and for restoring deteriorated images. In particular, denoising filters in accordance with the non-local means method (refer to Non-Patent Document 1) are known to demonstrate a high denoising effect. Hereinafter, denoising filters in accordance with the non-local means method are referred to as NLM filters.
FIG. 18 is a diagram describing an NLM filter. In FIG. 18, each square cell represents a search point, and the aggregate of search points forms the search shape. P0 is the denoising target pixel, and Ps is the pixel at a search point in the search target. T0 and Ts are template shapes; the template shape T0 of the comparison source has the same shape as the template shape Ts of the search target.
In the NLM filter, corresponding pixels in the template shape T0 of the comparison source and the template shape Ts of the search target are compared with each other, and the degree of similarity between the templates is calculated. In general, the degree of similarity between templates is calculated using the sum of squared differences (SSD) or the sum of absolute differences (SAD).
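The template comparison described above can be sketched as follows. This is an illustrative sketch, not the implementation from the present description: the function names and the representation of a template shape as a list of (dy, dx) offsets applied identically around both pixels are assumptions made for the example.

```python
import numpy as np

def template_ssd(img, p0, ps, offsets):
    """Sum of squared differences between the templates around p0 and ps.

    `offsets` is the template shape, given as (dy, dx) relative positions
    (a hypothetical representation); the same offsets are applied around
    both the comparison-source pixel p0 and the search-target pixel ps,
    so pixels with the same relative position are compared.
    """
    return sum((float(img[p0[0] + dy, p0[1] + dx]) -
                float(img[ps[0] + dy, ps[1] + dx])) ** 2
               for dy, dx in offsets)

def template_sad(img, p0, ps, offsets):
    """Sum of absolute differences over the same template shape."""
    return sum(abs(float(img[p0[0] + dy, p0[1] + dx]) -
                   float(img[ps[0] + dy, ps[1] + dx]))
               for dy, dx in offsets)
```

Boundary handling is omitted for brevity; the offsets are assumed to stay inside the image.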
FIG. 19 is a diagram illustrating the inputs and the output of an NLM filter execution unit. Basically, an NLM filter execution unit 1000 takes four inputs, namely a denoising target image, a search shape, a template shape, and a denoising coefficient, and generates a denoised image as its result. As the denoising coefficient, the noise variance is typically given when the original image, to which no noise has been applied, is available; when the original image is unavailable, an appropriate value is set by a user.
The NLM filter execution unit 1000 calculates a denoised pixel value for each pixel as follows. In the following, an example which uses SSD for calculating the degree of similarity between templates will be described.
(1) The variable SW for the sum of weights is initialized to 0, and the variable SP for the sum of pixel values is initialized to 0.
(2) The following processes are repeated for all the search points within the search shape.
(2-1) SSD is calculated as the degree of similarity between templates.
(2-2) Weight W=exp (−SSD/denoising coefficient)
(2-3) Sum of weights SW=sum of weights SW+weight W
(2-4) Sum of pixel values SP=sum of pixel values SP+weight W×(pixel value of search point)
(3) Upon completion of the processes of (2) for all the search points within the search shape, a denoised pixel value of a denoising target pixel is obtained by the following equation.(denoised pixel value)=sum of pixel values SP/sum of weights SW
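The per-pixel computation in steps (1) to (3) above can be sketched as follows. The function name, the offset-list representation of the search and template shapes, and the parameter name `h` for the denoising coefficient are assumptions made for this illustration; boundary handling is omitted, so all offsets are assumed to stay inside the image.

```python
import numpy as np

def nlm_denoise_pixel(img, target, search_offsets, template_offsets, h):
    """Compute one denoised pixel value, following steps (1)-(3).

    search_offsets:   search shape as (dy, dx) offsets around the target pixel.
    template_offsets: template shape as (dy, dx) offsets (hypothetical form).
    h:                denoising coefficient.
    """
    ty, tx = target
    sw = 0.0  # (1) sum of weights SW initialized to 0
    sp = 0.0  # (1) sum of pixel values SP initialized to 0
    for sy, sx in search_offsets:  # (2) repeat for all search points
        py, px = ty + sy, tx + sx
        # (2-1) SSD between the template around the target pixel
        #       and the template around the search point
        ssd = sum((float(img[ty + dy, tx + dx]) -
                   float(img[py + dy, px + dx])) ** 2
                  for dy, dx in template_offsets)
        w = np.exp(-ssd / h)          # (2-2) weight W
        sw += w                       # (2-3) accumulate sum of weights
        sp += w * float(img[py, px])  # (2-4) accumulate weighted pixel values
    return sp / sw                    # (3) denoised pixel value = SP / SW
```

On a uniform image every SSD is 0, so every weight is 1 and the result is the (unchanged) pixel value, which is a quick sanity check for the sketch.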
When a single value is given for each of the input denoising coefficient, the input search shape, and the input template shape, the NLM filter execution unit 1000 performs the denoising process using that single value and those single shapes for all the pixels of the denoising target image. When a group of data items corresponding to each pixel is given, it performs the denoising process while switching the value and the shapes for each corresponding point.
Moreover, in order to remove coding distortion, a denoising filter called a deblocking filter is installed in the “HM”, which is a test model of “High Efficiency Video Coding”, a next-generation video coding standard for which international standardization activities are currently being conducted by the “Moving Picture Experts Group (MPEG)” and the “Video Coding Experts Group (VCEG)” (refer to Non-Patent Document 2).