Recent years have seen remarkable improvements in the functionality and performance of digital cameras and digital camcorders utilizing solid-state image sensors such as CCD or CMOS sensors (hereinafter referred to as “image sensors”). In particular, rapid progress in semiconductor fabrication technology has made it possible to miniaturize the pixel structure of solid-state image sensors. As a result, the pixels and driving circuits of solid-state image sensors have attained higher degrees of integration, and pixel counts continue to rise. In recent years, the number of pixels in solid-state image sensors has grown from several million to over ten million, and sensors with tens of millions of pixels have begun to be developed.
Contrary to this trend toward more pixels, there is also a view that ten million pixels are unnecessary from the standpoint of human visual characteristics. The human retina has cone cells, which sense the color (RGB) of light, and rod cells, which sense brightness and darkness. There are said to be about 6.5 million cone cells and about 120 million rod cells. This indicates that, in terms of color discrimination, the human eye is comparable to a color camera with 6.5 million pixels. Moreover, from the standpoint of display devices, the number of pixels in current solid-state image sensors is considered already sufficient. For example, full high-definition resolution is 1920×1080, and even higher-definition images reach a resolution of only about 4096×2160. Given the prevalence of display devices capable of displaying such images, display resolution is not increasing as fast as the pixel counts of the solid-state image sensors currently under development. In addition, the signal readout time for one image grows in proportion to the number of pixels. Similarly, when images are recorded on a frame-by-frame basis, the amount of data also increases in proportion to the number of pixels.
In other words, although resolution is indeed improved in cameras using a solid-state image sensor with a large number of pixels, an excessive number of pixels may not be a requirement in view of human visual perception, display device resolution, signal readout time, and the amount of image data to be recorded. Therefore, there is also a notion that resolution may be somewhat sacrificed in exchange for improvements in other characteristics.
If resolution can be sacrificed, a plurality of pixels may be treated as one pixel in the processing. For example, as disclosed in Patent Document 1, the signals from a plurality of pixels may be summed for enhanced imaging sensitivity. Moreover, summing many pixel signals makes it possible to cope with low-luminance subjects, so that the dynamic range at low illuminance can be improved. This produces the same effect as employing an image sensor having a plurality of pixels with different aperture ratios, as disclosed in Patent Document 2 and Patent Document 3.
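As an illustrative sketch only (the cited patent documents do not specify this exact procedure), the summing of signals from a plurality of pixels described above can be expressed as 2×2 pixel binning of a raw sensor frame: each 2×2 block of neighboring pixels is summed into one output pixel, quadrupling the collected signal per output pixel at the cost of halving the resolution in each dimension.

```python
import numpy as np

def bin_pixels(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Sum each factor x factor block of pixel signals into one output pixel.

    Illustrative sketch of pixel-signal summation for enhanced sensitivity;
    the function name and interface are assumptions, not from the source.
    """
    h, w = frame.shape
    # Crop to a multiple of the binning factor so blocks tile evenly.
    h2, w2 = h - h % factor, w - w % factor
    cropped = frame[:h2, :w2]
    # Reshape into (rows, factor, cols, factor) blocks and sum each block.
    return cropped.reshape(h2 // factor, factor, w2 // factor, factor).sum(axis=(1, 3))

# A dim 4x4 readout becomes a 2x2 image with four times the signal per pixel.
frame = np.arange(16).reshape(4, 4)
binned = bin_pixels(frame)
print(binned)  # 2x2 array of block sums
```

Because each output value is the sum of several photodiode signals, low-luminance subjects that would fall below the noise floor of a single pixel can still yield a usable signal, which is the dynamic-range benefit noted above.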