An image sensor is a semiconductor device that converts an optical signal into an electrical signal by means of a photoelectric conversion element.
Image sensors can be classified into Complementary Metal Oxide Semiconductor (CMOS) image sensors and Charge Coupled Device (CCD) image sensors. CCD image sensors offer high sensitivity and low noise, but they are difficult to integrate with other devices and their power consumption is relatively high. CMOS image sensors, by contrast, have the advantages of being simple in process, easy to integrate with other devices, small in size, light in weight, low in power consumption, low in cost and the like. Therefore, with the development of technology, CMOS image sensors have increasingly replaced CCD image sensors in various electronic products. At present, CMOS image sensors are widely applied in still digital cameras, camera phones, digital video cameras, medical imaging devices (such as gastroscopes), vehicle-mounted cameras and the like.
The core elements of an image sensor are its pixels, which directly affect the size of the image sensor, its dark current level, noise level, imaging transparency, color saturation, image defects and the like.
A pair of contradictory factors continually drives the development of image sensors.
The first factor is an economic factor. The more image sensor chips a wafer can yield, the lower the cost of each chip. As pixels occupy most of the area of the whole image sensor chip, the size of each pixel needs to be relatively small to save cost. That is, the economic factor requires the size of the pixel in the image sensor to be reduced.
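The economic argument above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the wafer diameter, die areas, wafer cost and the first-order edge-loss correction are hypothetical numbers chosen for the example, not figures from the text.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough gross-die estimate: usable wafer area divided by die area,
    with a simple first-order correction for partial dies at the edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wafer_cost = 3000.0  # hypothetical wafer cost in dollars

# A pixel-array-dominated chip: shrinking the pixels shrinks the die.
large_die = dies_per_wafer(300, 40.0)  # 40 mm^2 die on a 300 mm wafer
small_die = dies_per_wafer(300, 30.0)  # same chip with smaller pixels

print(wafer_cost / large_die)  # cost per chip, larger die
print(wafer_cost / small_die)  # cost per chip, smaller die
```

More dies per wafer at a roughly fixed wafer cost means a lower cost per chip, which is why shrinking the pixel-dominated die area matters economically.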
The second factor is an image quality factor. To ensure image quality, especially indexes such as light sensitivity, color saturation and imaging transparency, enough light must reach the photoelectric conversion element (usually a photodiode) of the pixel. A larger pixel provides a larger light-sensitive area to collect light, so a relatively large pixel can provide better image quality. In addition, besides the photoelectric conversion element, the switching devices in the pixel, such as the reset transistor, the transfer transistor or the amplifier device (for example, a source follower transistor), also determine the dark current, the noise, image defects and the like. In terms of image quality, a larger device has better electrical performance and tends to form images of better quality. Therefore, the image quality factor requires the size of the pixel in the image sensor to be increased.
It can be seen that how to reconcile the above contradiction and reach an optimal trade-off is a problem the image sensor industry has always faced.
An existing image sensor typically includes a pixel array consisting of a plurality of pixels. In the layout, the plurality of pixels are tiled together to form a complete pixel array, and the shape of the pixels may be rectangular, square, polygonal (triangular, pentagonal, hexagonal) and the like according to requirements.
In the existing image sensor, the pixel structures can be classified into a three-transistor structure, a four-transistor structure and a five-transistor structure. In the three-transistor structure, the photoelectric conversion element is directly electrically connected with a floating diffusion region; photo-generated electrons generated in the photoelectric conversion element are stored in the floating diffusion region, and are converted and output through a Source Follower (SF) transistor under the sequential control of a Reset Transistor (RST) and a Row Selector (SEL) transistor.
Referring to FIG. 1, FIG. 1 schematically illustrates a cross-sectional view of a photoelectric conversion element with a four-transistor structure. A photoelectric conversion element 115, usually a Photodiode (PD), is electrically connected with a Floating Diffusion (FD) region 113 through a transfer transistor 114. A lead wire L3 (usually including a plug, interconnecting wires and the like) is electrically connected with a gate of the transfer transistor 114. A source follower transistor 112 is electrically connected with the floating diffusion region 113 and is configured to amplify a potential signal formed in the floating diffusion region 113; a lead wire L2 is electrically connected with a gate of the source follower (amplification) transistor 112. One terminal of a reset transistor 111 is electrically connected with a power supply VDD, another terminal of the reset transistor 111 is electrically connected with the floating diffusion region 113 to reset the potential of the floating diffusion region 113, and a gate of the reset transistor 111 is electrically connected with a lead wire L1. As can be seen from the above, the four-transistor structure adds, on the basis of the three-transistor structure, the transfer transistor 114 formed between the photoelectric conversion element 115 and the floating diffusion region 113. The transfer transistor 114 can effectively suppress noise. As a result, the four-transistor structure can lead to better image quality and has become the leading structure in the industry. In addition, a set of four-transistor devices can be shared by a plurality of photoelectric conversion elements to save chip area; such a shared structure is also considered a four-transistor structure.
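The readout sequence of the four-transistor pixel can be sketched as a toy model. The voltages, noise magnitudes and function names below are hypothetical, and the model deliberately simplifies the source follower to a unity-gain buffer; it only illustrates one commonly cited reason the transfer transistor helps with noise: sampling the floating diffusion both before and after charge transfer lets the readout subtract out the reset noise common to both samples.

```python
import random

def read_pixel(photo_signal_v: float, rng: random.Random) -> float:
    """Toy four-transistor readout (all values hypothetical):
    reset FD, sample, transfer PD charge, sample again, subtract."""
    vdd = 2.8                           # hypothetical supply level (V)
    reset_noise = rng.gauss(0.0, 0.01)  # reset (kTC) noise frozen on FD
    fd = vdd + reset_noise              # 1) RST pulses: FD reset to VDD + noise
    reset_sample = fd                   # 2) sample reset level via source follower
    fd -= photo_signal_v                # 3) TX pulses: PD electrons drop FD potential
    signal_sample = fd                  # 4) sample signal level
    return reset_sample - signal_sample # 5) difference cancels the shared reset noise

rng = random.Random(0)
readings = [read_pixel(0.5, rng) for _ in range(1000)]
# In this idealized model the reset noise cancels exactly,
# so every reading recovers the true 0.5 V signal.
print(readings[0])  # 0.5
```

In a three-transistor pixel there is no transfer gate between the photodiode and the floating diffusion, so the reset level cannot be sampled independently before the signal arrives, and the reset noise remains in the output.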
However, the pixels in the existing image sensor have defects that are difficult to overcome.
First, in the existing pixel, the four transistors are all of planar structures. In other words, if the chip area needs to be further reduced, the size of these devices (such as the transfer transistor, the reset transistor and the source follower transistor) must be reduced. However, if the size of these devices is reduced, their performance degrades accordingly: for example, the driving current of a device may decrease, electrical parameter fluctuation may increase, and amplification efficiency may drop. These problems are quite serious for image quality. Therefore, although the circuits at the periphery of the pixel array can continue to shrink in line width according to Moore's law, the size of the transistors in the pixel can only be reduced very slowly. Since the area of the whole image sensor chip is mainly determined by the pixel array, the structure of the existing pixels limits further reduction of the chip area, so that the cost of the existing image sensor remains relatively high.
Second, in the existing pixel, the four transistors are all of planar structures. For a pixel of a certain size, little area remains after the four transistors are accommodated, which limits the proportion of the pixel occupied by the photoelectric conversion element, i.e., the light-sensing part. For pixel performance, the smaller this proportion is, the less light can be collected per unit area, the less transparent the image is, the poorer its gradation is, and the duller its color is. In summary, the planar structures of the transistors limit further improvement of image quality.
Third, for the existing pixel unit, image quality under a dark field is quite essential. Key indexes include dark current, noise, white spots and dark spots, which derive from the low-frequency noise and thermal noise of the transistors and the surface recombination current of the photoelectric conversion element. In traditional processes, even though great effort has been spent in these aspects, the ideal effect cannot be achieved because the process limit has been reached. Therefore, a new image sensor and a corresponding process are needed to further reduce the dark current, noise, white spots, dark spots and other indexes.
Fourth, in the existing pixel, as each transistor is of a planar structure, the parasitic capacitance among the transfer transistor, the reset transistor and the source follower transistor cannot be further reduced along with size reduction. The parasitic capacitance plays an essentially negative role, for example, reducing the signal transmission rate, increasing low-frequency 1/f noise, and reducing the dynamic range, none of which is acceptable for an image sensor. Therefore, the parasitic capacitance needs to be further reduced to lower the low-frequency 1/f noise and to increase the signal transmission rate and the dynamic range, and this is a very tough and expensive task for the existing image sensor and its forming process.
Chinese patent application 201410193016.9 discloses an image sensor and a forming method thereof and provides a three-dimensional image sensor structure, in which a channel region of a source follower transistor is a beam structure having a top surface and two side surfaces, and a gate of the source follower transistor covers the top surface and at least one of the two side surfaces. In that application, the process steps for forming the gate of the source follower transistor are difficult to carry out, so that the performance of the semiconductor interface is affected. Therefore, how to form a gate with a good interface in a three-dimensional image sensor, so as to improve the performance of the image sensor, has become an urgent issue to be solved.