1. Field of Invention
Example embodiments of the present invention relate to an image sensing device and an image data processing method using the same.
2. Description of Related Art
An image sensor refers to a semiconductor device which converts an optical signal into an electric signal. Image sensors are categorized into charge coupled device (CCD) image sensors and complementary metal oxide semiconductor (CMOS) image sensors. The CCD image sensor typically comprises individual MOS capacitors that are disposed very close to one another, with charge carriers stored in the MOS capacitors. As for the CMOS image sensor, a pixel array including MOS transistors and photodiodes, a control circuit, and a signal processing circuit are integrated into a single chip.
The CCD image sensor has disadvantages in that its driving scheme is complicated, it dissipates a large amount of power, and its fabrication process is complicated because a large number of mask process steps are required. It is also difficult to achieve a one-chip implementation because a signal processing circuit cannot be integrated into a CCD chip. In contrast, the CMOS image sensor reproduces an image by forming a photodiode and MOS transistors within each unit pixel and sequentially detecting signals using a switching scheme. The CMOS image sensor has advantages in that power dissipation is reduced and fabrication typically requires fewer masks than the CCD fabrication process, thereby improving fabrication efficiency. In addition, a one-chip implementation can be achieved since several signal processing circuits and pixel arrays can be integrated into a single chip. Hence, the CMOS image sensor is considered a next-generation image sensor.
In general, an image sensor includes a pixel array, which receives external incident light and converts photoelectric charges into electric signals, and micro lenses which are arranged in pixels on the pixel array. In addition, the image sensor includes a logic circuit which processes light sensed through the pixel array into electric signals and converts the electric signals into data.
Typically, the pixel array is configured with a Bayer pattern, which is the most widely used arrangement. The Bayer pattern typically comprises 50% green pixels, 25% red pixels, and 25% blue pixels. The red pixels and the green pixels are alternately disposed in one line, and the blue pixels and the green pixels are alternately disposed in the next line. Thus, one pixel carries information about only one color, for example, red, blue, or green. Since every pixel datum typically has to carry information about all three colors to implement an image, the information about the two colors not provided by a target pixel has to be extracted from pixels neighboring the target pixel. This process is generally called interpolation.
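As a minimal sketch, the Bayer arrangement and the interpolation step described above can be illustrated as follows. The function names and the RGGB phase of the mosaic are illustrative assumptions, not part of any particular disclosed device; the interpolation shown is simple bilinear averaging of the four adjacent green samples.

```python
def bayer_color(row, col):
    """Return the color sampled at (row, col) in an RGGB Bayer mosaic.
    Even rows alternate R, G; odd rows alternate G, B, so green makes up
    half of the samples and red and blue a quarter each."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def interpolate_green(mosaic, row, col):
    """Estimate the missing green value at a red or blue site by
    averaging the four horizontally and vertically adjacent samples,
    which are all green sites in a Bayer mosaic (interior pixels only)."""
    neighbors = [mosaic[row - 1][col], mosaic[row + 1][col],
                 mosaic[row][col - 1], mosaic[row][col + 1]]
    return sum(neighbors) / len(neighbors)
```

For a 4x4 tile of this mosaic, `bayer_color` yields eight green, four red, and four blue sites, matching the 50/25/25 split described above.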
Since the information about the two missing colors produced by the interpolation process is not real image data, an implemented image differs somewhat from the real image. For instance, when interpolation is performed across a boundary between a dark region and a bright region of an image, the interpolated data adjacent to the boundary may contain errors, so that the interpolation process arrives at information that is very different from the real information associated with the boundary area. If green data for a blue pixel or a red pixel is extracted by interpolating between green data in a bright region and green data in a dark region, the extracted green data may carry wrong information. Since correct brightness information is very important for producing a good-quality image, this problem can be serious.
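The boundary problem described above can be worked through numerically. In this hypothetical example (the specific pixel values are illustrative assumptions), a vertical edge separates a dark region from a bright one, and naive four-neighbor averaging at a site on the dark side of the edge mixes one bright sample into the estimate, yielding a value that matches neither region.

```python
# A 4x4 patch of green values with a vertical brightness edge:
# columns 0-1 are dark (about 10), columns 2-3 are bright (about 200).
mosaic = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

# Interpolating the green value at site (1, 1), which lies on the dark
# side of the edge, averages three dark neighbors and one bright one:
left, right = mosaic[1][0], mosaic[1][2]  # 10, 200
up, down = mosaic[0][1], mosaic[2][1]     # 10, 10
estimate = (left + right + up + down) / 4  # (10 + 200 + 10 + 10) / 4 = 57.5
```

The true green value in the dark region is about 10, but the interpolated estimate is 57.5, a large error that appears as a visible artifact along the edge. This is the brightness-boundary problem the passage above describes.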