Conventional silicon wafers require a substantial absorption depth for photons having wavelengths longer than approximately 500 nm. For example, conventional silicon wafers of standard thickness (less than approximately 750 μm) cannot effectively absorb photons having wavelengths in excess of 1050 nm.
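The dependence of absorption on depth follows the Beer–Lambert law. A minimal Python sketch, using order-of-magnitude absorption coefficients for silicon (illustrative assumed values, not measured data), shows how little long-wavelength light is captured within a shallow collection region:

```python
import math

def absorbed_fraction(alpha_per_cm: float, depth_um: float) -> float:
    """Beer-Lambert law: fraction of incident photons absorbed within
    `depth_um` of silicon having absorption coefficient `alpha_per_cm`
    (in cm^-1)."""
    return 1.0 - math.exp(-alpha_per_cm * depth_um * 1e-4)  # 1 um = 1e-4 cm

# Illustrative (order-of-magnitude) absorption coefficients for silicon;
# actual values depend on temperature and doping.
illustrative_alpha = {500: 1.1e4, 800: 1.0e3, 1050: 1.0e1}  # nm -> cm^-1

for wavelength_nm, alpha in illustrative_alpha.items():
    # Fraction absorbed within a hypothetical 5 um collection region.
    frac = absorbed_fraction(alpha, 5.0)
    print(f"{wavelength_nm} nm: {frac:.1%} absorbed within 5 um")
```

Under these assumed coefficients, nearly all 500 nm light is absorbed within a few micrometers, while almost none of the 1050 nm light is, which is why long-wavelength photons demand a deep collection element.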
As such, designing pixels using conventional silicon requires a deep collection element for photons having wavelengths greater than approximately 500 nm. If photons incident upon a surface of the wafer and traveling into its depth are absorbed in a region deeper than the effective field of these pixel elements, the resulting photoelectrons can wander (diffuse) to adjacent pixels, causing cross talk and lower resolution. In photodetector arrays, and in applications using the same, this can produce blurring and a loss of spatial accuracy in applications such as imaging equipment. Photoelectrons wandering in field-free regions also have a high probability of recombining before they are collected by a pixel, resulting in lower sensitivity and efficiency.
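The diffusion-driven crosstalk described above can be illustrated with a toy Monte Carlo random walk. The step size, reflecting back-surface boundary, and pixel pitch below are arbitrary assumptions chosen for illustration; this is a sketch of the mechanism, not a device model:

```python
import math
import random

def lateral_spread(depth_um: float, step_um: float = 1.0,
                   max_depth_um: float = 50.0, rng=None) -> float:
    """Toy 2D random walk of a photoelectron generated `depth_um` below
    the collection region: the carrier takes isotropic unit steps until
    it diffuses back up to depth 0 (collection), reflecting off an
    assumed back surface at `max_depth_um`. Returns its lateral
    displacement at the moment of collection."""
    rng = rng or random.Random()
    x, z = 0.0, depth_um
    while z > 0.0:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step_um * math.cos(theta)
        z += step_um * math.sin(theta)
        if z > max_depth_um:          # reflect at the back surface
            z = 2.0 * max_depth_um - z
    return x

rng = random.Random(0)
pitch_um = 5.0   # hypothetical pixel pitch
trials = 200
strays = sum(abs(lateral_spread(20.0, rng=rng)) > pitch_um / 2
             for _ in range(trials))
print(f"{strays / trials:.0%} of deep carriers stray outside the home pixel")
```

The deeper a carrier is generated, the longer its field-free walk and the larger its lateral spread, so deep absorption translates directly into crosstalk between neighboring pixels.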
A CMOS imaging circuit can be characterized by a “device fill factor,” corresponding to the fraction of the overall chip area effectively devoted to the pixel array, and a “pixel fill factor,” corresponding to the effective area of the light-sensitive photodiode relative to the overall area of the pixel; together these measures indicate how much of the silicon is photoactive. The device fill factor in conventional devices is less than unity (1.0) because, as described above, a notable portion of the device beneath the pixel array area cannot be used for processing.
Moreover, the pixel fill factor in conventional devices is typically substantially less than 1.0 because, for example, bussing and addressing circuits are fabricated around the base substrate layers of a pixel. As such, the bussing and addressing circuits limit the amount of space available for photodetection circuitry. Such bussing and addressing circuitry also limits the acceptance cone angle for light directed towards an imaging array.
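Both fill-factor metrics are simple area ratios. A brief sketch, using hypothetical areas chosen purely for illustration:

```python
def device_fill_factor(array_area_mm2: float, chip_area_mm2: float) -> float:
    """Fraction of the overall chip area devoted to the pixel array."""
    return array_area_mm2 / chip_area_mm2

def pixel_fill_factor(photodiode_area_um2: float, pixel_area_um2: float) -> float:
    """Fraction of each pixel's area that is photoactive."""
    return photodiode_area_um2 / pixel_area_um2

# Hypothetical example numbers (not from any real sensor):
# a 20 mm^2 pixel array on a 30 mm^2 chip, and a 3x3 um photodiode
# inside a 5x5 um pixel whose remainder holds bussing/addressing circuits.
print(device_fill_factor(20.0, 30.0))   # -> 0.666...
print(pixel_fill_factor(9.0, 25.0))     # -> 0.36
```

The product of the two ratios gives the overall fraction of chip area that is photoactive, which is why a sub-unity pixel fill factor compounds the device-level loss described above.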
Imagers can use front side illumination (FSI) or back side illumination (BSI); each architecture has advantages and disadvantages. In a typical FSI imager, incident light enters the semiconductor only after passing the transistors and metal circuitry. Some of this light scatters off the transistors and circuitry before reaching the light-sensing portion of the imager, causing optical loss and noise. A lens can be disposed on the topside of an FSI pixel to direct and focus the incident light onto the light-sensing active region of the device, thus partially evading the circuitry. BSI allows for smaller pixel architectures and a higher fill factor by locating the transistors and circuitry on the side opposite that through which the incident light enters the device. This increase in fill factor and reduction in light scatter increases efficiency.