The present technology relates to solid-state image sensors, methods for producing the solid-state image sensor, and electronic apparatus and, in particular, to a solid-state image sensor, a method for producing the solid-state image sensor, and an electronic apparatus which make it possible to obtain a better pixel signal.
In the past, a solid-state image sensor such as a CMOS (complementary metal oxide semiconductor) image sensor or a CCD (charge coupled device) has been widely used in digital still cameras, digital video cameras, and the like.
For example, the light that has entered a CMOS image sensor is photoelectrically converted in a PD (photodiode) of a pixel. Then, the charge generated in the PD is transferred to an FD (floating diffusion) via a transfer transistor and is converted into a pixel signal at a level corresponding to the amount of received light.
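The charge-to-signal conversion described above can be sketched numerically. The quantum efficiency, full-well capacity, and FD capacitance used below are illustrative assumptions, not values from this disclosure.

```python
# Illustrative model of the PD -> FD -> pixel-signal chain: photons are
# converted to electrons in the PD, the charge is transferred to the FD,
# and the FD converts it to a voltage (V = Q / C).
# All constants are assumed example values, not taken from this disclosure.

E_CHARGE = 1.602e-19  # elementary charge in coulombs

def pd_charge(photon_count, quantum_efficiency=0.6, full_well=10_000):
    """Electrons accumulated in the PD, clipped at an assumed full-well capacity."""
    electrons = photon_count * quantum_efficiency
    return min(electrons, full_well)

def fd_signal_volts(electrons, fd_capacitance_farads=1.6e-15):
    """Voltage swing on the FD after the transfer transistor moves the charge."""
    return electrons * E_CHARGE / fd_capacitance_farads

signal = fd_signal_volts(pd_charge(5_000))
```

Under these assumed values, 5,000 incident photons yield 3,000 electrons and a signal swing of roughly 0.3 V; the clipping at `full_well` reflects the capacity limitation of the PD discussed below.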
Incidentally, in an existing CMOS image sensor, since a method by which pixel signals are sequentially read from the pixels on a row-by-row basis (a so-called rolling shutter method) is generally adopted, distortion sometimes occurs in an image due to a difference in exposure timing.
It is for this reason that Japanese Unexamined Patent Application Publication No. 2008-103647, for example, discloses a CMOS image sensor that adopts a method by which the pixel signals are read simultaneously from all the pixels by providing a charge retaining section in the pixel (a so-called global shutter method) and has an all-pixel simultaneous electronic shutter function. By adopting the global shutter method, all the pixels have the same exposure timing, making it possible to prevent distortion from occurring in the image.
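The difference in exposure timing between the rolling shutter method and the global shutter method can be sketched as follows; the row count and per-row readout time are illustrative assumptions.

```python
# Sketch of exposure-start timing for rolling vs. global shutter.
# ROWS and T_ROW are assumed illustrative values, not from this disclosure.

ROWS = 4          # number of pixel rows (illustrative)
T_ROW = 10e-6     # time to read out one row, in seconds (illustrative)

def rolling_shutter_starts():
    """Each row starts exposing T_ROW later than the previous one, so a
    moving subject is captured at slightly different times per row,
    which is the source of the image distortion mentioned above."""
    return [row * T_ROW for row in range(ROWS)]

def global_shutter_starts():
    """With a charge retaining section in every pixel, all rows start
    exposure at the same instant; readout then proceeds from the
    retained charge, so no row-to-row timing skew remains."""
    return [0.0 for _ in range(ROWS)]
```

The growing offsets in `rolling_shutter_starts()` versus the uniform zeros of `global_shutter_starts()` are exactly the timing difference that makes the global shutter method free of rolling-shutter distortion.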
Now, when a configuration in which the charge retaining section is provided in the pixel is adopted, the layout of the pixel is constrained. This may decrease the aperture ratio, resulting in a reduction in the sensitivity of the PD and in the capacities of the PD and the charge retaining section. Furthermore, optical noise may be generated when light enters the charge retaining section while it is retaining a charge.
With reference to FIG. 1, the light that enters the charge retaining section will be described. In FIG. 1, a sectional configuration example of one pixel of the CMOS image sensor is shown.
As shown in FIG. 1, a pixel 11 is formed of a semiconductor substrate 12, an oxide film 13, a wiring layer 14, a color filter layer 15, and an on-chip lens 16 which are stacked. Furthermore, in the semiconductor substrate 12, a PD 17 and a charge retaining section 18 are formed. In the pixel 11, a region in which the PD 17 is formed is a PD region 19, and a region in which the charge retaining section 18 is formed is a charge retaining region 20. Moreover, in the wiring layer 14, a light shielding film 21 having an opening in a region corresponding to the PD 17 is provided.
In the pixel 11 configured as described above, the light that has been concentrated by the on-chip lens 16 and has passed through the color filter layer 15 and the wiring layer 14 passes through the opening of the light shielding film 21 and the oxide film 13 and illuminates the PD 17. However, as indicated with the solid-white arrows in FIG. 1, when the light is incident obliquely, it sometimes passes through the PD 17 and enters the charge retaining region 20. If such light is photoelectrically converted deep in the semiconductor substrate 12 and the resulting charge leaks into the charge retaining section 18 while it is retaining a charge, optical noise is generated.
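The oblique-incidence geometry described above can be sketched as follows. The refractive index of silicon and the example angle and depth are nominal assumed values, not figures from this disclosure.

```python
# Geometric sketch of why oblique light can reach the charge retaining
# region: after refracting into the silicon substrate, a ray shifts
# laterally by depth * tan(angle_in_silicon) by the time it reaches a
# given depth. Refractive indices are nominal (and wavelength-dependent).
import math

N_AIR = 1.0
N_SI = 3.9   # nominal refractive index of silicon (assumed value)

def lateral_shift_um(incidence_deg, depth_um):
    """Lateral travel of a refracted ray on reaching depth_um in silicon."""
    theta_in = math.radians(incidence_deg)
    theta_si = math.asin(N_AIR / N_SI * math.sin(theta_in))  # Snell's law
    return depth_um * math.tan(theta_si)
```

For example, under these assumed values a ray incident at 30 degrees shifts laterally by roughly 0.39 um on reaching a depth of 3 um, enough in a small pixel for light admitted over the PD region to end up above the adjacent charge retaining region.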
Moreover, in recent years, as disclosed in Japanese Unexamined Patent Application Publication No. 2003-31785, for example, a back-illuminated-type CMOS image sensor has been developed. In the back-illuminated-type CMOS image sensor, since the wiring layer of a pixel can be formed on the side of the sensor opposite to the side on which the light is incident, it is possible to prevent vignetting of the incident light caused by the wiring layer.
In FIG. 2, a sectional configuration example of one pixel of the back-illuminated-type CMOS image sensor is shown. Moreover, in FIG. 2, such components as are found also in the pixel 11 of FIG. 1 are identified with the same reference characters, and their detailed descriptions will be omitted.
As shown in FIG. 2, in a pixel 11′, the light illuminates the back side of the semiconductor substrate 12 (the face facing the upper portion of FIG. 2), which is the side opposite to the front side on which the wiring layer 14 is provided. Moreover, in the pixel 11′, the charge retaining section 18 is formed on the front side of the semiconductor substrate 12, and a light shielding layer 22 having the light shielding film 21 is formed between the semiconductor substrate 12 and the color filter layer 15.
In the pixel 11′ of the back-illuminated-type CMOS image sensor configured as described above, it is possible to increase the sensitivity of the PD 17. However, since the charge retaining section 18 is formed on the front side of the semiconductor substrate 12, that is, in a region of the semiconductor substrate 12 that is deep with respect to the incident light, it is difficult to prevent the leakage of light into the charge retaining section 18.
That is, as indicated with the solid-white arrows in FIG. 2, light that has passed through the on-chip lens 16 at an angle sometimes leaks into the charge retaining section 18 after passing through the opening of the light shielding film 21 formed above the PD region 19. If light leaks into the charge retaining section 18 while it is retaining a charge, optical noise is generated.