In an imaging device (single-plate solid-state imaging device) of a digital camera or the like, one color filter is generally provided for each photodiode to form a color filter array such as a Bayer array. Image data acquired by the imaging device is data (RAW data) in which each pixel carries color information on only one color, and a viewable image is obtained by interpolating the color information missing in each pixel from peripheral pixels.
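As an illustration of this interpolation step (not part of the patented methods discussed below), the missing colors can be filled in by averaging the known same-color neighbors of each pixel. The `demosaic_bilinear` helper and the RGGB layout below are assumptions made for the sketch:

```python
import numpy as np

def box_sum3(a):
    """Sum of each pixel's 3x3 neighborhood (zero-padded at the border)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Fill in the two missing colors at every pixel of an RGGB Bayer
    mosaic by averaging the known same-color samples in a 3x3 window.
    A hypothetical helper for illustration, not the patented processing."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.empty((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        num = box_sum3(np.where(mask, raw, 0.0))  # sum of known samples
        den = box_sum3(mask.astype(float))        # how many were known
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb

# A flat gray scene survives demosaicing unchanged.
raw = np.full((8, 8), 0.5)
print(np.allclose(demosaic_bilinear(raw), 0.5))  # True
```

Because each output color is a local average, this simple scheme is exactly where the false colors discussed below originate when the scene contains fine detail.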
For a subject containing spatial frequencies higher than the Nyquist frequency determined by the pixel spacing (pixel pitch) of the imaging device, correct sampling cannot be performed, causing moire and false colors. Particularly in an imaging device having the Bayer array, the ratio of R pixels, and likewise of B pixels, to G pixels is 1/2, making false colors more likely.
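This sampling failure can be checked numerically: a sinusoid above the Nyquist limit of 0.5 cycles per pixel is indistinguishable, at the sample positions, from a lower-frequency alias. The frequencies below are a generic illustration, not tied to any particular imaging device:

```python
import numpy as np

n = np.arange(64)                    # pixel indices
high  = np.cos(2 * np.pi * 0.7 * n)  # 0.7 cycles/pixel: above Nyquist (0.5)
alias = np.cos(2 * np.pi * 0.3 * n)  # 0.3 cycles/pixel: its alias (1.0 - 0.7)
# The two signals produce identical samples, so the sensor cannot tell
# them apart; this folding is what appears as moire and false color.
print(np.allclose(high, alias))  # True
```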
As a general countermeasure, an optical low-pass filter is inserted in front of the imaging device to lower the spatial frequency content of the subject. However, this has negative effects such as degraded resolution and an increase in cost caused by the optical low-pass filter.
Therefore, there has been proposed a method of reducing false colors by performing edge detection through edge detection filter processing on luminance components, detecting a false color by determining whether or not color-difference components fall within a predetermined range, and then smoothing, or taking an arithmetic average of, the false-color pixels and peripheral pixels in a region where the false color is detected (see, e.g., Patent Literature 1: Japanese Patent No. 4035688).
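The flow described above can be sketched as follows. The gradient-based edge detector, the threshold values, and the 3x3 averaging are illustrative assumptions, not the specific processing of Patent Literature 1:

```python
import numpy as np

def suppress_false_color(y, cb, cr, edge_thresh=0.2, cd_low=0.05, cd_high=0.5):
    """Hedged sketch of the kind of processing described above; all
    thresholds and the 3x3 smoothing are assumptions for illustration.
    y, cb, cr: 2-D float arrays (luminance and color differences)."""
    # 1. Edge detection on the luminance component (gradient magnitude).
    gy, gx = np.gradient(y)
    edges = np.hypot(gx, gy) > edge_thresh
    # 2. Flag edge pixels whose color-difference magnitude falls in a
    #    predetermined range regarded as typical of false color.
    mag = np.hypot(cb, cr)
    flagged = edges & (mag > cd_low) & (mag < cd_high)
    # 3. Replace flagged color differences with their 3x3 neighborhood mean.
    def box_mean3(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    cb_out = np.where(flagged, box_mean3(cb), cb)
    cr_out = np.where(flagged, box_mean3(cr), cr)
    return cb_out, cr_out

# With a flat luminance image no edges are found, so nothing is altered.
y = np.zeros((6, 6))
cb = np.linspace(0, 1, 36).reshape(6, 6)
cr = cb[::-1].copy()
out_cb, out_cr = suppress_false_color(y, cb, cr)
print(np.array_equal(out_cb, cb) and np.array_equal(out_cr, cr))  # True
```

Note that steps 2 and 3 act only on the flagged region, which is precisely the locality that motivates the special circuit configuration criticized further below.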
There has also been proposed a method of reducing false colors by taking two images at a focal position and a non-focal position of a lens, detecting false colors based on a difference between the two images, and then changing a mixture ratio of the two images (see, e.g., Patent Literature 2: Japanese Patent Application Publication No. 2011-109496).
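The two-image approach can be sketched as below, assuming the two images are already aligned. The per-pixel difference threshold and the mixing weight are hypothetical values, not taken from Patent Literature 2:

```python
import numpy as np

def blend_by_difference(in_focus, defocused, diff_thresh=0.1, weight_fc=0.6):
    """Mix a defocused exposure into the in-focus image where the two
    differ strongly. Hypothetical sketch: false colors are sharp-image
    artifacts that blur away when defocused, so a large per-pixel
    difference flags suspect pixels. Inputs: aligned H x W x 3 floats."""
    diff = np.abs(in_focus - defocused).max(axis=-1)       # worst channel
    w = np.where(diff > diff_thresh, weight_fc, 0.0)[..., None]
    return (1.0 - w) * in_focus + w * defocused            # per-pixel mix

# Identical inputs mean nothing is flagged and the image passes through.
img = np.full((4, 4, 3), 0.25)
print(np.allclose(blend_by_difference(img, img), img))  # True
```

The alignment assumed by this sketch is exactly what the time lag between the two shots makes difficult in practice, as noted below.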
However, among the above conventional technologies, when the processing is limited to the false-color detected pixels and their peripheral pixels, as in Patent Literature 1, a special circuit configuration is required, which makes the processing difficult to perform on a general-purpose image processing IC.
Also, when two images are taken as in Patent Literature 2, a time lag occurs between the shooting of the two images. For this reason, mixing the two images also requires special processing such as extraction of feature points and alignment between the images.