Recently, digital cameras and camera phones have been rapidly developed and commercialized. Such a digital camera or camera phone senses light using a semiconductor sensor, typically a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor or a Charge-Coupled Device (CCD) sensor.
A CMOS image sensor is a device that converts an optical image into an electrical signal using CMOS manufacturing technology. In the CMOS image sensor, MOS transistors are formed in a number corresponding to the number of pixels, and a switching scheme that sequentially detects their outputs is employed. Compared to the CCD image sensor currently in wide use, the CMOS image sensor is advantageous in that its driving method is simple, various scanning schemes can be implemented, the signal-processing circuit can be integrated into a single chip to miniaturize products, compatible CMOS technology decreases manufacturing costs, and power consumption is greatly reduced.
FIG. 1 is a view showing a CMOS image sensor having regular quadrilateral unit pixels. As shown in FIG. 1, a row decoder 130, which designates a row address, is arranged along one side of a pixel array 110, and a column decoder 150, which is connected to the data outputs of the pixels and designates their column address, is arranged perpendicular to the row decoder 130.
In detail, data are extracted from the image sensor as follows: the row decoder 130 selects a first row, and the column decoder 150 extracts the data of the respective pixels in that row and then amplifies them. Next, the row decoder 130 selects a second row, and the column decoder 150 extracts and amplifies the data of the respective pixels in the second row. By repeating this procedure, the data of all pixels are extracted.
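The row-by-row readout sequence described above can be sketched in code. This is only an illustrative model, not the actual circuit behavior; the array representation and the fixed amplification gain are assumptions for clarity.

```python
# Illustrative sketch of row-by-row readout: the row decoder selects one
# row at a time, and the column circuitry reads and amplifies each pixel
# in that row. The gain value is an assumed placeholder.

def read_sensor(pixel_array, gain=2):
    """Read out a 2-D pixel array row by row, amplifying each value."""
    frame = []
    for row in pixel_array:                  # row decoder selects this row
        amplified = [v * gain for v in row]  # column circuit amplifies each pixel
        frame.append(amplified)
    return frame
```

For example, `read_sensor([[1, 2], [3, 4]])` returns `[[2, 4], [6, 8]]`: each row is processed in turn until all pixel data are extracted.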
Various types of pixels are used as the pixels for the CMOS image sensor. As representatively commercialized pixel types, there are a 3-Transistor (3-T)-type pixel composed of three basic transistors and one photodiode, and a 4-T-type pixel composed of four basic transistors and one photodiode.
FIG. 2 is a circuit diagram showing a typical 3-T-type unit pixel in a CMOS image sensor.
Referring to FIG. 2, the 3-T-type pixel of the CMOS image sensor includes a single photodiode PD, which converts photons into electrons, and three NMOS transistors: a reset transistor Rx, which resets the potential of the photodiode PD; a drive transistor Dx, which varies the current flowing through a source follower circuit (composed of the drive transistor Dx, a selection transistor Sel, and a DC gate) in response to variation in the voltage of a Floating Diffusion (FD) electrode, thereby changing the output voltage of the unit pixel; and the selection transistor Sel, which selects the row address of the pixel array.
In this case, the DC gate denotes a load transistor in which a constant voltage is applied to the gate thereof and which allows constant current to flow through the gate, Vcc denotes a driving voltage, Vss denotes a ground voltage and Output denotes the output voltage of a unit pixel.
That is, the unit pixel of the CMOS image sensor includes a photodiode, a reset transistor for resetting the photodiode, and a source follower circuit. When the photodiode PD is reset to the voltage Vcc by the reset transistor Rx and light strikes the reset photodiode PD, electron-hole pairs are formed in the junction area of the photodiode PD. The holes diffuse into the silicon substrate, and the electrons accumulate in the junction area. When the drive transistor Dx of the source follower circuit is turned on by the accumulated charge and the selection transistor Sel is selected, the output voltage of the unit pixel varies with the voltage of the FD electrode, so that the corresponding pixel information is output in analog form.
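The reset-integrate-read cycle above can be summarized with a minimal numerical sketch. The voltage level, the linear integration model, and the parameter names are assumptions made for illustration only; a real pixel's response depends on device physics not described here.

```python
# Hedged model of the 3-T pixel cycle: reset the floating-diffusion (FD)
# node to Vcc, then let accumulated photo-electrons pull its voltage down
# in proportion to the incident light. All numbers are illustrative.

VCC = 3.3  # assumed driving voltage Vcc

def pixel_output(photocurrent, integration_time, conversion_gain=1.0):
    """Return the pixel output voltage after reset and light integration."""
    v_fd = VCC  # reset transistor Rx pulls the FD node to Vcc
    # accumulated electrons lower the FD voltage in proportion to the light
    v_fd -= photocurrent * integration_time * conversion_gain
    return max(v_fd, 0.0)  # source follower output tracks the FD voltage
```

With no light (`photocurrent = 0`) the output stays at the reset level; brighter light yields a lower output voltage, which is the analog pixel information read out through the selection transistor.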
However, if regular quadrilateral unit pixels are arranged as shown in FIG. 1, a problem arises: as the degree of integration increases to improve resolution, the length of the unit-pixel arrangement increases, so parasitic resistance and parasitic capacitance increase. Consequently, the delay of a control signal grows, thus deteriorating sensitivity.
Accordingly, a honeycomb-shaped image sensor, which uses the same regular quadrilateral unit pixels, has been adopted as a scheme for improving resolution while maintaining sensitivity.
If the honeycomb-shaped image sensor is used, locations at which no actual image data exist are formed where the horizontal and vertical lines intersect. Data for these locations are interpolated and virtual image data are inserted there, so that resolution can be doubled without increasing the number of lines from which image data are read.
FIG. 3 is a view showing interpolation performed by the conventional honeycomb-shaped image sensor. As shown in FIG. 3, regular quadrilateral pixels in even-numbered columns are offset by ½ pitch from those in odd-numbered columns so as to implement the honeycomb shape.
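The interpolation of a virtual pixel at a location where no physical pixel exists can be sketched as follows. Taking the mean of the surrounding real pixels is an assumed interpolation rule chosen for illustration; the source does not specify the exact formula used by the conventional sensor.

```python
# Hedged sketch of honeycomb interpolation: a virtual pixel value is
# estimated from the real pixels surrounding the empty intersection.
# Averaging the neighbors is an assumption, not the source's exact rule.

def interpolate_missing(neighbors):
    """Estimate a virtual pixel value from its surrounding real pixels."""
    if not neighbors:
        raise ValueError("at least one neighboring pixel is required")
    return sum(neighbors) / len(neighbors)
```

For instance, a virtual pixel surrounded by real pixels of values 10, 20, 30, and 40 would be assigned 25, filling the empty intersection without reading any additional lines.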
However, if regular quadrilateral pixels are arranged with a ½-pitch offset, a problem may arise in that the horizontal and vertical resolutions differ; in detail, the horizontal resolution is twice the vertical resolution.