1. Field of the Invention
The present invention relates to a solid-state imaging device, such as a CCD, and more particularly, to a solid-state imaging device having a wide dynamic range.
2. Description of the Related Art
When an object is photographed by using a solid-state imaging device, such as a CCD sensor or a CMOS sensor, it is desirable to prevent a higher-intensity portion of the photographed image from being saturated to a white level and to prevent a lower-intensity portion of the same from sinking to a black or dark level. Specifically, in order to enable photographing of an image over a range from lower intensity to higher intensity, the solid-state imaging device or a signal processing circuit must attain a wide dynamic range.
To this end, various methods for expanding the dynamic range of the solid-state imaging device have hitherto been proposed. For instance, according to a related-art technique described in JP-A-59-210775, both high-sensitivity pixels and low-sensitivity pixels are provided on the surface of the CCD sensor, and images captured by the high-sensitivity pixels and images captured by the low-sensitivity pixels are merged together, to thereby achieve a wide dynamic range.
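By way of illustration only, such a merge of high-sensitivity and low-sensitivity pixel data may be sketched as follows. The sensitivity ratio of 16 and the 12-bit saturation level are illustrative assumptions, not values taken from the cited reference:

```python
import numpy as np

# Hypothetical parameters: the high-sensitivity pixels are assumed to be
# 16x as sensitive as the low-sensitivity pixels, and both are assumed to
# saturate at a digital level of 4095 (12-bit output).
SENSITIVITY_RATIO = 16.0
SATURATION = 4095

def merge(high, low):
    """Merge a high-sensitivity image with a co-located low-sensitivity
    image: where the high-sensitivity pixel is saturated, substitute the
    low-sensitivity reading scaled up by the sensitivity ratio."""
    high = np.asarray(high, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    scaled_low = low * SENSITIVITY_RATIO
    return np.where(high >= SATURATION, scaled_low, high)

# A bright pixel saturates the high-sensitivity photodiode (4095), but the
# low-sensitivity pixel still responds linearly (e.g., reads 300); the
# merged value recovers 300 * 16 = 4800, beyond the single-pixel range.
print(merge([1000, 4095], [62, 300]))  # → [1000. 4800.]
```

The widened dynamic range follows directly: the merged output remains linear up to the exposure at which the low-sensitivity pixel itself saturates, i.e., roughly the sensitivity ratio times the original range.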
Conceivable methods for creating a difference in sensitivity between the high-sensitivity pixels and the low-sensitivity pixels include, e.g., a method of stacking an ND filter on the pixels of one type and a method of narrowing the openings of light-shielding films provided on the pixels of one type. However, since the high-sensitivity pixels and the low-sensitivity pixels are adjacently provided on the surface of the solid-state imaging device, there arises a problem of the sensitivity ratio of the high-sensitivity pixels to the low-sensitivity pixels changing depending on the angle of light entering the surface of the solid-state imaging device or on the distribution of light intensity.
According to a related-art technique described in JP-A-62-108678 (corresponding to U.S. Pat. No. 4,647,975), a single image is photographed twice by using the solid-state imaging device, and the thus-photographed images are merged together, to thus widen the dynamic range. Specifically, according to this related-art technique, a difference in sensitivity is achieved by means of a period of exposure time. A low-sensitivity image signal obtained by means of a short period of exposure time (i.e., a short period of storage time) and a high-sensitivity image signal obtained by means of a long period of exposure time (i.e., a long period of storage time) are merged together. However, according to this related-art technique, there exists a time difference between the image data stemming from the short exposure and the image data stemming from the long exposure. Particularly when a still image is photographed, there arises a problem of the technique being unsuitable for photographing a moving object, for high-speed shutter operation, or for strobe-light photographing.
According to related-art techniques described in JP-A-5-64083 and JP-A-6-141229 (corresponding to U.S. Pat. No. 5,420,635 and U.S. Pat. No. 5,455,621, respectively), a signal is processed such that a characteristic curve of high-sensitivity image data and a characteristic curve of low-sensitivity image data are smoothly connected together. As a result, the high-sensitivity image data are mainly used in a low-exposure energy domain, and low-sensitivity image data are mainly used in a high-exposure energy domain. Thus, an exposure energy value at which the output signal becomes saturated is shifted toward a higher exposure energy level than that achieved conventionally, to thereby attain a wider dynamic range.
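The smooth connection of the two characteristic curves may be sketched as a weighted blend whose weight shifts from the high-sensitivity signal toward the scaled low-sensitivity signal as exposure energy rises. The knee position, ramp width, and sensitivity ratio below are illustrative assumptions, not values from the cited patents:

```python
import numpy as np

def blend_weight(high_signal, knee=3000.0, ramp=800.0):
    """Weight given to the high-sensitivity signal: 1 well below the
    knee (low-exposure energy domain), falling linearly to 0 across
    the ramp (high-exposure energy domain)."""
    w = (knee + ramp - np.asarray(high_signal, dtype=np.float64)) / ramp
    return np.clip(w, 0.0, 1.0)

def combine(high, low, ratio=16.0):
    """Smoothly connect the characteristic curves: mainly the
    high-sensitivity data at low exposure energy, mainly the scaled
    low-sensitivity data at high exposure energy."""
    high = np.asarray(high, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    w = blend_weight(high)
    return w * high + (1.0 - w) * low * ratio

# Far below the knee the output follows the high-sensitivity curve;
# far above it, the scaled low-sensitivity curve takes over.
print(combine([500.0, 4095.0], [31.25, 300.0]))
```

Because the weight varies continuously across the ramp, the composite characteristic curve has no discontinuity at the crossover, and the exposure energy at which the output saturates is shifted to the saturation point of the low-sensitivity data.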
When an attempt is made to widen the dynamic range of the CCD sensor by increasing the area of the light-receiving sections on the surface of the semiconductor substrate, to thus increase the amount of saturated electric charges, there is required a vertical transfer CCD capable of transferring electric charges equal in amount to the saturated electric charges. Specifically, when the area of the light-receiving sections is increased, there arises a necessity for increasing the quantity of electric charges transferred over the vertical transfer channel; e.g., a necessity for broadening the width of the transfer channel. Accordingly, when the chip size of the semiconductor substrate cannot be increased, there exists a limitation; that is, a difficulty in unilaterally enlarging the area of the light-receiving sections. Moreover, an increase in the amount of electric charges to be transferred and in the transfer speed of the electric charges raises another problem of an increase in the power consumed at the time of driving of the CCD.
An increase in the number of pixels resulting from miniaturization of pixels conversely results in a situation in which there is no other way but to reduce the area of the light-receiving sections. For this reason, there is faced a problem of a further decrease in the amount of signal electric charges which can be handled. Meanwhile, noise components included in an output signal do not decrease proportionately with miniaturization of pixels.
When compared with a silver halide film, the solid-state imaging device is said to be narrower in latitude (allowance) with respect to exposure energy. For instance, when electric charges exceeding the amount of electric charges which can be stored in the light-receiving sections arise as a result of strong light entering the solid-state imaging device, the overflowed electric charges flow into adjacent pixels where no light enters, which in turn induces a known blooming phenomenon of a white area spreading with the area where the light has fallen being taken as a center.
In order to suppress the blooming phenomenon, there has hitherto been adopted a structure which discharges excessive electric charges to the outside before they flow into adjacent light-receiving sections. For instance, an example structure is described in C. H. Sequin, "Blooming Suppression in Charge-Coupled Area Imaging Devices," Bell Syst. Tech. J., 51, pp. 1923 to 1926 (1972). Specifically, in this structure, another electric charge storage area—which is called an overflow drain (OD) and is identical in conductivity type with the electric charge storage section of the light-receiving section—is disposed beside the electric charge storage section. Excessive electric charges are collected into the overflow drain, and the thus-collected electric charges are wiped out of the device. This structure is generally called a lateral overflow drain (LOD), and the blooming phenomenon is greatly lessened by this structure.
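The effect of such a drain may be illustrated by a toy one-dimensional sketch, not a physical device model; the full-well capacity, the incident-charge figures, and the equal split of spilled charge between the two neighbors are illustrative assumptions:

```python
# Toy 1-D model of blooming and of its suppression by an overflow drain.
CAPACITY = 1000  # assumed full-well capacity of each light-receiving section

def readout(charges, overflow_drain):
    """Simulate one row of pixels. Without a drain, charge beyond the
    full-well capacity spills equally into the two neighboring pixels
    (blooming); with a drain, the excess is simply wiped out."""
    wells = list(charges)
    for i, q in enumerate(charges):
        excess = q - CAPACITY
        if excess <= 0:
            continue
        wells[i] -= excess
        if not overflow_drain:  # excess spreads into adjacent wells
            if i > 0:
                wells[i - 1] += excess // 2
            if i + 1 < len(wells):
                wells[i + 1] += excess // 2
    return [min(w, CAPACITY) for w in wells]

# A strongly lit center pixel (3000) among dark neighbors (0):
print(readout([0, 3000, 0], overflow_drain=False))  # → [1000, 1000, 1000]
print(readout([0, 3000, 0], overflow_drain=True))   # → [0, 1000, 0]
```

In the first readout the dark neighbors are whitened by spilled charge, which is the blooming phenomenon described above; in the second, the drain removes the excess and the neighbors remain dark.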
However, when the lateral overflow drain structure is adopted for the CCD, overflow drains must be laid in a lattice pattern over the entire chip of the semiconductor substrate, which in turn raises a problem of an increase of the chip size.
As described in Y. Ishihara et al., "Interline CCD Image Sensor with Anti-Blooming Structure," ISSCC Dig. Tech. Papers, pp. 168–169 (1982), a vertical overflow drain (VOD) has been developed. An N+/P-well/N-substrate structure is formed in the depth-wise direction of a light-receiving section of the substrate, to thus cause excessive electric charges to overflow down to the substrate. Specifically, a bias voltage is applied between the n-type substrate and a P-well layer formed in the substrate, thereby discharging to the n-type substrate the excessive electric charges overflowed from the potential wells of the light-receiving sections.
In the vertical overflow drain structure, it is easy to uniformly control the substrate potential of the respective light-receiving sections, and hence the overflow level can be controlled by the applied substrate voltage or by the voltage pulse width and timing. Since the area of the chip is not sacrificed, substantially all current CCDs adopt the vertical overflow drain structure, thereby significantly lessening the phenomena called blooming and smear attributable to excessive incident light.
However, even when the CCD is provided with the overflow drain structure, an output signal level corresponding to incident light which exceeds a given quantity of light is saturated. Hence, a signal difference (i.e., a dynamic range from a highlight level to a dark level) cannot be determined over a wide range of incident light energy, which still remains a cause of narrow latitude.