The present invention relates to the fabrication of light-sensing devices based on, and integrated with, silicon Complementary Metal Oxide Semiconductor (CMOS) circuits.
There are two main light-sensor architectures in silicon technology: Charge Coupled Devices (CCDs), whose operation is based on MOS diodes, and CMOS image sensors whose operation is based on pn-junctions. The principles of operation of these devices can be found in “Physics of Semiconductor Devices” by S. M. Sze, Wiley, New York, 1981, Chapters 7 and 13.
For both device types the absorption efficiency as a function of wavelength is entirely determined by the optoelectronic properties of bulk silicon. For that reason, both device types respond very differently to wavelengths at the extremes of the visible spectrum (violet/blue versus red). For the same reason, both devices have very low efficiency of near Infra-Red (IR) detection, and are incapable of detecting the 1.3 μm and 1.55 μm wavelengths used for fiber-optic communications; nor can either device provide “solar blind” Ultra-Violet (UV) detection. By “solar blind” it is meant that wavelengths longer than the UV (visible and IR ranges) are not detected, i.e., the photons of those wavelengths are not absorbed.
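The long-wavelength limit mentioned above follows directly from silicon's band gap: a photon can only excite an electron-hole pair if its energy exceeds the gap. A minimal sketch of this cutoff calculation, using the commonly quoted room-temperature band-gap value of about 1.12 eV for silicon (an assumption not stated in this text):

```python
# Cutoff wavelength for band-to-band absorption: lambda_c = h*c / Eg.
H_C_EV_UM = 1.2398  # Planck constant times speed of light, in eV*um

def cutoff_wavelength_um(band_gap_ev: float) -> float:
    """Longest wavelength (in um) that can be absorbed across the band gap."""
    return H_C_EV_UM / band_gap_ev

si_cutoff = cutoff_wavelength_um(1.12)  # silicon, ~1.11 um
print(f"Si absorption cutoff: {si_cutoff:.2f} um")
# Photons at 1.3 um and 1.55 um carry less energy than the silicon band gap,
# which is why bulk silicon cannot detect those fiber-optic wavelengths.
```

The same formula explains why wider-band-gap materials shift the cutoff toward the UV, and narrower-band-gap materials extend detection into the IR.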
For the visible range, CMOS image sensors have been gaining ground on CCDs on the basis of their greater compatibility with the standard CMOS processing used for microprocessors, DRAMs, DSPs, etc., and hence the possibility of a System on Chip (SoC), of which the image sensor would be a key component. The straightforward integration with CMOS circuitry (logic and memory) allows higher functionality and lower cost. A cross-sectional view of a conventional CMOS imager is shown in FIG. 1. CMOS imagers are thus fast becoming the image sensors of choice for many products such as digital cameras and camcorders, PC cameras, imagers for third-generation (3G) cell phones, etc. CMOS imagers also offer advantages in circuit design: each pixel can be accessed at random, and circuitry can be included in individual pixels to speed up signal amplification and processing, leading to overall improved image quality.
Another perceived advantage of CMOS imagers, from a process-technology standpoint, is that pixel scaling could simply follow the fast pace of CMOS transistor scaling, thus benefiting from enormous economies of scale with respect to technology development. However, it is becoming evident that for advanced CMOS technologies (e.g., below 0.25 μm) there are fundamental issues that prevent a modular, straightforward integration of image sensors with standard CMOS logic.
At the present time, all evidence leads to the conclusion that the problems of integrating light sensors with standard CMOS become worse as the critical dimensions become smaller (more advanced CMOS technology). This is especially true for the detection/absorption of photons with the longer wavelengths of the visible spectrum (red: λ≈650 nm).
The scaling of CMOS imagers faces two types of issues: semiconductor physics and technological problems.
The semiconductor physics problem is related to the thickness (depth) of silicon necessary to absorb enough light in order to produce a useful electrical signal. This is determined by the band-structure of the active-layer of the photodiode.
On the other hand, it is also a technological issue because the trench isolation between CMOS devices and the source/drain junctions become shallower with each new, more advanced CMOS generation. When the depth of the isolation trenches becomes less than the distance required to absorb photons of a particular wavelength, it is no longer possible to separate the charge carriers generated by light penetrating the silicon at adjacent pixels. Consequently, there is a loss of resolution for the detection of that color.
Similarly, when electron-hole pairs are generated far from the increasingly shallow metallurgical junction between source/drain and substrate/well, the electric field is very weak, and the charge carriers travel only slowly, by diffusion, towards the electrodes. This increases the probability of recombination before the carriers reach the electrodes, thereby reducing the photocurrent, which degrades parameters such as signal-to-noise ratio and speed of image acquisition.
Therefore, there are two contradictory requirements: on one hand, the absorption coefficient of silicon is a fixed material parameter, different for each primary color/wavelength; on the other hand, the advancement of CMOS technology requires devices with shallower junctions and shallower trenches between adjacent MOSFETs.
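This contradiction can be quantified with the Beer-Lambert law, I(x) = I0·exp(−αx): the fraction of light absorbed within a given silicon depth depends exponentially on the wavelength-dependent absorption coefficient α. The sketch below uses rough literature values for α in bulk silicon and an assumed trench depth; all numbers are illustrative assumptions, not values taken from this document:

```python
import math

def absorbed_fraction(alpha_per_cm: float, depth_um: float) -> float:
    """Fraction of incident light absorbed within depth_um of material,
    per the Beer-Lambert law: 1 - exp(-alpha * x)."""
    return 1.0 - math.exp(-alpha_per_cm * depth_um * 1e-4)  # 1 um = 1e-4 cm

# Rough room-temperature absorption coefficients for bulk silicon (cm^-1);
# approximate values for illustration only.
ALPHA_SI = {"blue (~450 nm)": 2.5e4, "red (~650 nm)": 2.5e3}

TRENCH_DEPTH_UM = 0.35  # assumed STI depth for a deep-submicron CMOS node

for color, alpha in ALPHA_SI.items():
    frac = absorbed_fraction(alpha, TRENCH_DEPTH_UM)
    print(f"{color}: {100 * frac:.0f}% absorbed within {TRENCH_DEPTH_UM} um")
```

Under these assumptions, only a small fraction of red light is absorbed within the trench depth, so most red-generated carriers originate below the isolation and can diffuse to neighboring pixels, consistent with the resolution loss described above.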
Thus far, the workarounds used to solve these problems have consisted in tuning the standard CMOS process flow, for a given technology generation, and introducing process steps specific to the devices in the light-absorbing areas. These extra steps provide the necessary trench depth and junction depth/profiles for the light-sensor devices. However, they do not address the photodiode device architecture and/or materials, which remain unchanged.
It must be highlighted that some of these special steps are performed quite early in the CMOS process flow, and because of that they have an impact on subsequent process steps/modules, thus requiring the modification/fine-tuning of the latter.
For example, shallow trench isolation (STI), which is the standard isolation technology for 0.25 μm CMOS and below, is one of the first process modules in the long list of steps to fabricate CMOS devices.
Other steps requiring modification for the light-sensor devices are the potential-well and junction doping profiles, thus changing ion-implantation steps and thermal annealing/activation steps. With these modifications the fabrication of image sensors is no longer modular and requires extended adjustments for every new CMOS generation. The extent of the adjustments becomes more severe with the increasingly smaller critical dimensions inherent to the advancement of CMOS technology.
At the moment, optoelectronic transceivers for fiber-optic communications, consisting of a light-sensing device (photodiode) and trans-impedance amplifiers, are made with different III/V compound materials and are not monolithically integrated. The photodiode is made with a material sensitive to the wavelength of interest, for instance 1.3 μm or 1.55 μm or other wavelengths, and the trans-impedance amplifiers are made with wider-band-gap materials for high-speed electronics.
Alternative device architectures for light sensing do exist, but they fail to fulfill all the necessary requirements for manufacturable, high-quality photo-detection in the visible or invisible ranges, or have not been able to compete with standard CCDs and CMOS imagers.
One of the most interesting alternative device architectures is the Avalanche Photo-Diode (APD), which, in spite of being known for decades, only very recently started to gain attention as a possible light sensor to be integrated into BiCMOS processes (Alice Biber, Peter Seitz, Heinz Jäckel, “Avalanche photodiode image sensor in standard silicon BiCMOS technology”, Sensors and Actuators A 90 (2001), pp. 82-88). However, for reasons related to the quality of the active-layer material (ion-implanted bulk silicon) and to the device architecture (bulk lateral pn-photodiode), this kind of APD has not succeeded, and is not likely to succeed, in competing against CCDs and CMOS imagers.
Avalanche mode is not a viable option with conventional CMOS light sensors because it requires the depletion region of the pn-junction to be under an electric field near the breakdown field of the junction, which for silicon is close to 500 kV/cm. For a depletion region about 200 nm wide (deep), for example, that would require about 10 volts. Such a large voltage could break the gate oxide of the MOSFETs in whose junctions the photo-absorption takes place. It would also prevent seamless integration, at the electrical level, with the CMOS logic outside the pixel.
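The 10-volt figure above is just the breakdown field multiplied by the depletion width; a one-line sketch of the arithmetic, using the values from the text:

```python
# Bias needed to sustain the avalanche field across the depletion region:
# V = E_breakdown * width (assuming a roughly uniform field for this estimate).
E_BREAKDOWN_V_PER_CM = 5.0e5  # ~500 kV/cm, silicon junction breakdown field
WIDTH_CM = 200e-7             # 200 nm depletion width, expressed in cm

v_required = E_BREAKDOWN_V_PER_CM * WIDTH_CM
print(f"Required bias: {v_required:.0f} V")
```

Ten volts is several times the supply voltage of a deep-submicron CMOS node, which is why such a bias is incompatible with the gate oxides and logic levels of the surrounding circuitry.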
Larger coefficients of absorption for the visible range, infra-red, including 1.3 μm and 1.55 μm wavelengths as well as other wavelengths of interest, can be achieved through the incorporation of materials with band-gaps different than that of silicon (see for example S. M. Sze, “Physics of Semiconductor Devices”, Wiley, New York 1981, FIG. 5, p. 750, and FIG. 6, p. 751).
There are many examples of materials which may be either grown or deposited on silicon by Chemical Vapor Deposition (CVD) or Physical Vapor Deposition (PVD).
Examples include Si1−xGex, Si1−yCy, Si1−x−yGexCy, PbTe, ZnS, GaN, AlN, Al2O3, Pr2O3, CeO2, CaF2, Sr2TiO4, etc.
As an example of work to date, the integration of crystalline Si1−xGex and/or Si1−x−yGexCy p-i-n photodiodes has only been possible with Heterojunction Bipolar Transistors, but not with standard CMOS (“Monolithically Integrated SiGe—Si PIN-HBT Front-end Photoreceivers”, J. Rieh et al., IEEE Photon. Tech. Lett., Vol. 10, 1998, pp. 415-417).