1. Field of the Invention
Methods consistent with the present invention relate to color filter interpolation for acquiring a color image from an image acquired by an image sensor, and in particular, to color interpolation using color correlation similarity and multi-direction edge information.
2. Description of the Related Art
Videos acquired by an image sensor are monochrome, so that the three colors of red (R), green (G), and blue (B) are required at each pixel position in order to acquire a color video. To this end, most digital video apparatuses have a color filter disposed in front of the image sensor, which allows only a specific frequency band in the visible light region to be transmitted, so that the color video is acquired.
FIG. 1 is a view illustrating a video provided to a conventional digital video apparatus. Hereinafter, a flow of the video provided to the digital video apparatus will be described with reference to FIG. 1.
The video is provided to a color filter 102 via a lens 100. The color filter 102 transmits only the video having a specific frequency band among the received videos. As shown in FIG. 1, the color filter 102 is composed of several regions, and each region allows only the video having the same frequency band as that of one color among the R, G, and B colors to be transmitted. The video transmitted through the color filter 102 is provided to an image sensor 104. The image sensor 104 converts the received video signal into an electrical video signal. FIG. 1 shows the video signal output from the image sensor 104.
Referring to FIG. 1, the right upper end illustrates the video transmitted through the regions of the color filter 102 having the same frequency band as that of the R color, the right center end illustrates the video transmitted through the regions having the same frequency band as that of the G color, and the right lower end illustrates the video transmitted through the regions having the same frequency band as that of the B color.
In general, each pixel is composed of three channel values of R, G, and B for representing a color. However, each pixel which has passed through the color filter has a pixel value composed of only one channel value among the three channel values of R, G, and B. That is, the right upper end of FIG. 1 corresponds to a video having only the R channel value, the right center end corresponds to a video having only the G channel value, and the right lower end corresponds to a video having only the B channel value. Hereinafter, the R channel value, the G channel value, and the B channel value will be collectively referred to as color values for simplicity of description.
FIG. 2 illustrates an example of an arrangement of the color filter of FIG. 1. In particular, FIG. 2 shows the arrangement of the Bayer pattern. Referring to FIG. 2, a video is composed of a plurality of pixels, and each pixel represents only one color among the R, G, and B colors. In particular, the number of G pixels is larger than the number of R or B pixels. That is, the number of G pixels is equal to the sum of the number of R pixels and the number of B pixels. This is because the G color is closest to the luminance component and people are most sensitive to the G color.
As such, one pixel is not represented by the three colors of R, G, and B, but by only one color, for the sake of reducing the cost of the digital video apparatus. That is, when one pixel is represented by three colors, the cost of the digital video apparatus increases. Accordingly, the channel value of the color representing a pixel, or the channel values of pixels adjacent to the pixel, are employed for the colors which are not represented. A method of acquiring the channel values of the colors not represented, using the channel value representing the pixel or the channel values of the adjacent pixels, is referred to as color filter array (CFA) interpolation.
Hereinafter, the CFA interpolation will be schematically described with reference to FIG. 2. Referring to FIG. 2A, a video input in the Bayer pattern is divided into three planes: the R, G, and B planes. As described above, it can be seen that the number of pixels constituting the G plane is larger than the number of pixels of the R plane or the B plane. That is, the number of R pixels included in the R plane and the number of B pixels included in the B plane are four each, whereas the number of G pixels included in the G plane is eight. Accordingly, a pixel which does not represent a color in the R, G, or B plane acquires a color value by means of the CFA interpolation. This is accomplished by operation (B). By carrying out the CFA interpolation, all pixels constituting the R, G, and B planes have R, G, and B channel values, respectively. In general, the CFA interpolation is carried out such that the interpolation is first carried out on the G plane and then on the R and B planes. In addition, by means of operation (C), the CFA interpolation acquires one video using the three planes. Hereinafter, the conventional CFA interpolation methods will be described.
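The plane-splitting step described above can be sketched as follows. This is an illustrative sketch only: the function name is hypothetical, and an RGGB phase (R at even rows and even columns) is assumed, since FIG. 2 as described here does not fix the phase of the Bayer pattern.

```python
import numpy as np

def split_bayer_planes(raw):
    """Split a Bayer-pattern image into sparse R, G, B planes.

    `raw` is a 2-D array sampled with an assumed RGGB Bayer layout.
    Unsampled positions are left as 0; these are the positions the
    CFA interpolation must subsequently fill in.
    """
    h, w = raw.shape
    r = np.zeros((h, w), raw.dtype)
    g = np.zeros((h, w), raw.dtype)
    b = np.zeros((h, w), raw.dtype)
    r[0::2, 0::2] = raw[0::2, 0::2]   # R at even rows, even columns
    g[0::2, 1::2] = raw[0::2, 1::2]   # G at even rows, odd columns
    g[1::2, 0::2] = raw[1::2, 0::2]   # G at odd rows, even columns
    b[1::2, 1::2] = raw[1::2, 1::2]   # B at odd rows, odd columns
    return r, g, b
```

On a 4×4 patch this yields four R samples, eight G samples, and four B samples, matching the counts given above for FIG. 2A.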
1) Gradient Based Method
The gradient based method carries out the interpolation of the G channel value at a pixel which is not a G pixel, in response to the edge pattern of the video. That is, an edge component is measured from the pixels positioned in the vertical direction, and an edge component is measured from the pixels positioned in the horizontal direction. The interpolation is then carried out along the direction having the lower of the measured horizontal and vertical edge component values. Hereinafter, a description will be given with reference to FIG. 3.
A case of interpolating the G channel value of pixel 5 among the pixels shown in FIG. 3 will be described. Edge components of the horizontal direction are first extracted using the channel values of pixel 3 (R), pixel 4 (G), pixel 5 (R), pixel 6 (G), and pixel 7 (R), and edge components of the vertical direction are then extracted using the channel values of pixel 1 (R), pixel 2 (G), pixel 5 (R), pixel 8 (G), and pixel 9 (R). Equation 1 below corresponds to an example of extracting the edge components of the horizontal and vertical directions. Ra, Ga, and Ba (a is an arbitrary natural number) denoted in Equation 1 represent the channel values of the corresponding color at the respective pixels.

ΔH = |G4 − G6| + |2×R5 − R3 − R7|
ΔV = |G2 − G8| + |2×R5 − R1 − R9|   Equation 1
ΔH denotes the edge component of the horizontal direction, and ΔV denotes the edge component of the vertical direction. Equation 2 below corresponds to a case of carrying out the interpolation in consideration of the edge components of the horizontal and vertical directions.

if (ΔH > ΔV), G5 = (G2 + G8)/2 + (R5 − R1 + R5 − R9)/4
else if (ΔH < ΔV), G5 = (G4 + G6)/2 + (R5 − R3 + R5 − R7)/4
else G5 = (G2 + G4 + G6 + G8)/4 + (4×R5 − R1 − R9 − R3 − R7)/8   Equation 2
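The gradient based interpolation of Equations 1 and 2 at the R pixel (pixel 5) of FIG. 3 can be sketched as follows. The function name and the scalar-argument interface are illustrative only; the arithmetic follows Equations 1 and 2 directly.

```python
def interpolate_g_at_r(G2, G4, G6, G8, R1, R3, R5, R7, R9):
    """Gradient based G interpolation at an R pixel (Equations 1 and 2).

    Pixel numbering follows FIG. 3: pixel 5 is the center R pixel;
    pixels 2 and 8 are its vertical G neighbors, pixels 4 and 6 its
    horizontal G neighbors, and pixels 1, 9 (vertical) and 3, 7
    (horizontal) are the nearest R pixels.
    """
    # Equation 1: edge components in the horizontal and vertical directions
    dH = abs(G4 - G6) + abs(2 * R5 - R3 - R7)
    dV = abs(G2 - G8) + abs(2 * R5 - R1 - R9)

    # Equation 2: interpolate along the direction of smaller variation
    if dH > dV:
        return (G2 + G8) / 2 + (2 * R5 - R1 - R9) / 4
    elif dH < dV:
        return (G4 + G6) / 2 + (2 * R5 - R3 - R7) / 4
    else:
        return (G2 + G4 + G6 + G8) / 4 + (4 * R5 - R1 - R9 - R3 - R7) / 8
```

For a flat region (all G neighbors equal, all R samples equal) the result reduces to the common G value, and when a horizontal edge dominates (ΔH > ΔV) only the vertical G neighbors contribute.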
The gradient based method is advantageous in terms of sharpness and color fringe errors as compared to earlier interpolation methods; however, it still produces many color fringe errors. Further, it estimates the edge direction using only a difference between edge components, so that the image quality is degraded by frequent changes in the interpolation direction.
2) Constant Hue Based Interpolation
The constant hue based interpolation method has been proposed in consideration of the fact that the conventional interpolation methods allow an abrupt change in the hue component to occur, thereby causing many color fringe errors. That is, the constant hue based interpolation method carries out a hue based interpolation after carrying out the conventional interpolation, so that the performance of the color interpolation is enhanced. The constant hue based interpolation method assumes that the color ratio between adjacent pixels (positional components: (x, y)) is the same when the change in the hue component within a small region is small. Equation 3 below corresponds to the case of having the same color ratio between adjacent pixels.

Ry/Gy = Rx/Gx
Ry = Gy × (Rx/Gx)   Equation 3
The constant hue based interpolation method acquires the G plane (c) consisting of the G pixels of the Bayer pattern, and acquires the interpolated G plane (f) by carrying out the conventional interpolation on the acquired G plane (c), as shown in FIG. 2. Color filter interpolation is then carried out on the R and B planes using the acquired G planes. Hereinafter, a method of determining R2 and B3 will be described with reference to FIG. 4. FIG. 4 shows the G plane on which the interpolation has been carried out and the R and B planes on which the interpolation has not been carried out. Equation 4 below corresponds to the case of determining R2 and B3.

R2 = G2 × {(R1/G1) + (R3/G3)}/2
B3 = G3 × {(B2/G2) + (B4/G4)}/2   Equation 4
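The ratio-based interpolation of Equation 4 can be sketched as follows. The function names and scalar-argument interfaces are illustrative; the pixel numbering follows FIG. 4, and the G values are assumed to come from the already-interpolated G plane.

```python
def interpolate_r_at_g(G2, R1, G1, R3, G3):
    """Equation 4: R2 = G2 * ((R1/G1) + (R3/G3)) / 2.

    Determines the missing R value at a G pixel (pixel 2) by averaging
    the R/G color ratios of the adjacent R pixels (pixels 1 and 3),
    assuming the color ratio is constant within the small region.
    """
    return G2 * ((R1 / G1) + (R3 / G3)) / 2

def interpolate_b_at_g(G3, B2, G2, B4, G4):
    """Equation 4: B3 = G3 * ((B2/G2) + (B4/G4)) / 2.

    Determines the missing B value at pixel 3 from the B/G color
    ratios of the adjacent B pixels (pixels 2 and 4).
    """
    return G3 * ((B2 / G2) + (B4 / G4)) / 2
```

When the neighboring color ratios agree exactly (a locally constant hue), the result is simply that ratio scaled by the local G value.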
The constant hue based interpolation method can reduce the color fringe errors. However, it carries out the interpolation using the G plane, so that when the reliability of the interpolation result on the G plane is poor, the reliability of the interpolation on the R or B plane may also become poor, which may cause color fringe errors to occur.
3) Pei's Method
Pei's interpolation method is a modification of the constant hue based interpolation method, which uses the R pixels of the R plane and the B pixels of the B plane when the interpolation is carried out on the G plane. Pei's interpolation method is superior to the gradient based method and the constant hue based interpolation method in terms of color fringe errors. However, it has the problem that zipper artifacts occur because it does not consider the edge information at the time of carrying out the G interpolation. Accordingly, a method capable of effectively carrying out the color interpolation is required.