An image sensor converts a visual image into digital data from which a picture may be reproduced. The image sensor comprises an array of pixels, which are unit devices for the conversion of the visual image into digital data. Digital cameras and optical imaging devices employ an image sensor. Image sensors include charge-coupled devices (CCDs) and complementary metal oxide semiconductor (CMOS) sensors.
While CMOS image sensors have been developed more recently than CCDs, CMOS image sensors provide the advantages of lower power consumption, smaller size, and faster data processing than CCDs, as well as direct digital output, which is not available in CCDs. Also, CMOS image sensors have lower manufacturing cost than CCDs, since many standard semiconductor manufacturing processes may be employed to manufacture CMOS image sensors. For these reasons, commercial employment of CMOS image sensors has been steadily increasing in recent years.
For typical CMOS image sensors, images are captured employing a “rolling shutter method.” In the rolling shutter method, the image is captured on a row-by-row basis within a pixel array, i.e., the image is captured contemporaneously for all pixels in a row, but the capture of the image is not contemporaneous between adjacent rows. Thus, the precise time of the image capture is the same only within a row, and is different from row to row.
For each row, the image is captured in the light conversion unit of each pixel. Charges generated from the light conversion unit are then transferred to a floating diffusion node. The voltage of the floating diffusion node is then read out of each pixel in the same row to column sample circuits before moving on to the next row. This process is repeated until the image is captured by the pixels in all the rows, i.e., by the entire array of the pixels. The data is then read out sequentially or in some other order. The resulting image is one in which each captured row actually represents the subject at a different time. Thus, for highly dynamic subjects (such as objects moving at a high rate of speed), the rolling shutter methodology can create image artifacts.
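The row-by-row timing described above can be sketched in a few lines. This is an illustrative model only; the function name and the row read-out period are hypothetical and not taken from any particular sensor:

```python
# Hypothetical sketch of rolling-shutter timing: each row of the pixel
# array is captured at a successively later time, so capture time is
# shared within a row but differs from row to row.

def rolling_shutter_capture_times(num_rows, row_readout_time_us):
    """Capture time of each row, in microseconds, relative to the first row."""
    return [row * row_readout_time_us for row in range(num_rows)]

times = rolling_shutter_capture_times(num_rows=4, row_readout_time_us=10)
# Each successive row is captured one row-readout period later than the
# previous one, which is why a fast-moving subject appears skewed.
```

For a subject moving quickly across the frame, the offset between the first and last row's capture times is what produces the characteristic skew artifact.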
To solve this image artifact issue of capturing high speed objects, a global shutter method may be employed. The global shutter method employs a global shutter operation, in which the image for the whole frame is captured in the light conversion units of the pixels at the exact same time for all the rows and columns. The signal in each light conversion unit is then transferred to a corresponding floating diffusion node. The voltage at the floating diffusion nodes is read out of the imager array on a row-by-row basis. The global shutter method enables image capture of high speed subjects without image artifacts, but introduces a concern with the global shutter efficiency of the pixel since the integrity of the signal may be compromised by any charge leakage from the floating diffusion node between the time of the image capture and the time of the reading of the imager array.
Specifically, in the rolling shutter method, the image signal is held on the floating diffusion node (FD) for a significantly shorter time than the actual time of exposure in the light conversion unit, e.g., a photodiode. Thus, the contribution of charge generation at the FD is orders of magnitude smaller than the charge generation during the integration time in the light conversion structure, e.g., the photodiode. Moreover, this hold time on the floating diffusion is constant for all pixels in the imager array, making correction for any of its contribution simple with standard correlated double sampling (CDS) techniques.
In contrast, in the global shutter method, the image signal is held on the FD for varying amounts of time. For example, the signal from the first row may have the least wait time, which corresponds to the time needed to read out a single row, while the signal from the last row has the greatest wait time, which corresponds to nearly the full frame read-out time, during which the charge on the floating diffusion may be degraded due to charge leakage or charge generation. Any charge generation or charge leakage that occurs on the floating diffusion node during the wait time can have a significant impact on the quality of the signal that is read out of the imager.
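The row-dependent degradation described above can be sketched with a simple leakage model. The constant leakage rate, the function names, and the numeric values here are illustrative assumptions, not figures from the source:

```python
# Hypothetical sketch: in a global shutter, row k waits roughly k
# row-readout periods on the floating diffusion before being read out,
# and charge leakage during that wait degrades the stored signal.
# The per-microsecond leakage fraction below is an assumed value.

def signal_after_hold(initial_signal, hold_time_us, leak_per_us=0.0001):
    """Signal remaining after leaking a fixed fraction per microsecond."""
    return initial_signal * (1.0 - leak_per_us) ** hold_time_us

# The first row is read almost immediately; the last of 1000 rows waits
# 1000 row-readout periods of 10 us each.
first_row = signal_after_hold(10000, hold_time_us=0)
last_row = signal_after_hold(10000, hold_time_us=1000 * 10)
# first_row retains the full signal, while last_row retains measurably
# less, so the degradation varies across the frame.
```

Because the loss depends on the row's position in the read-out order, it cannot be removed by a single frame-wide correction, unlike the constant FD hold time of the rolling shutter case.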
A metric of the efficiency in preserving the initial charge in the pixel is “global shutter efficiency,” which is the ratio of a signal that is actually read out of the pixel to the signal that would have been read out immediately after the signal was captured by the pixel. Ideally, the signal read out should be exactly the same as the signal captured, i.e., the global shutter efficiency should be 1.0 in an ideal CMOS image sensor. In practice, this is not the case due to the charge leakage and/or charge generation, and the picture quality is correspondingly degraded.
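The efficiency metric defined above is a simple ratio, sketched below. The function name and electron counts are hypothetical examples, not values from the source:

```python
# Hypothetical sketch of the "global shutter efficiency" metric: the
# ratio of the signal actually read out of a pixel to the signal that
# would have been read out immediately after capture.

def global_shutter_efficiency(captured_signal, read_signal):
    """Ratio of read-out signal to originally captured signal."""
    return read_signal / captured_signal

# Example: a pixel captures 10000 electrons, but leakage during the hold
# time on the floating diffusion leaves 9900 electrons at read-out.
eff = global_shutter_efficiency(captured_signal=10000, read_signal=9900)
# An ideal sensor with no leakage or generation would give exactly 1.0.
```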
In order to improve the global shutter efficiency, it is necessary to reduce any change to the signal held as electrical charge in the floating diffusion. In view of the above, there is a need for semiconductor devices and circuits providing reduced changes in the signal stored in a floating diffusion.
Further, there exists a need for a design structure that enables the design and manufacture of such semiconductor devices and circuits.