1. Field
The disclosed embodiments relate generally to color interpolation.
2. Background
A digital image capture device, such as a digital camera or a cell phone with image capture capability, includes an image sensor. The image sensor includes a two-dimensional array of sensors. Each sensor is said to be located at a “pixel location.” Each sensor detects the intensity of one color of light. Typically, there are sensors for green, sensors for red, and sensors for blue.
FIG. 1 (Prior Art) is a diagram of an image sensor whose sensors are arranged in a popular pattern called the Bayer pattern. Due to the way humans perceive colors and images, it is often advantageous for the image sensor to subsample red and blue with respect to green. Note that the uppermost row of sensors includes green and red sensors, in alternating fashion. The next row of sensors includes green and blue sensors, in alternating fashion. This alternation of rows repeats vertically down the image sensor. Accordingly, the image sensor of FIG. 1 has more green sensors than red sensors or blue sensors. Red and blue are therefore said to be subsampled. Only one color sample value is taken at each pixel location on the image sensor. Each color sample value may, for example, be an eight-bit value.
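The row arrangement described above can be sketched as follows. This is an illustrative sketch only; the function names are hypothetical, and the exact phase of the pattern (which color begins each row) varies between sensors and is assumed here, not taken from FIG. 1.

```python
def bayer_color(row, col):
    """Return which color ('R', 'G', or 'B') the sensor at (row, col)
    samples, for one assumed phase of the Bayer pattern: even rows
    alternate green/red, odd rows alternate blue/green."""
    if row % 2 == 0:                      # green/red row
        return 'G' if col % 2 == 0 else 'R'
    else:                                 # blue/green row
        return 'B' if col % 2 == 0 else 'G'

def mosaic(image):
    """Keep only the one sampled color per pixel location.

    `image` is a nested list of (r, g, b) triples; the result holds a
    single sample value per location, as the sensor would produce."""
    idx = {'R': 0, 'G': 1, 'B': 2}
    return [[pixel[idx[bayer_color(r, c)]]
             for c, pixel in enumerate(row)]
            for r, row in enumerate(image)]
```

Counting the sensors in any region of this pattern shows green sampled twice as often as red or blue, which is the subsampling the text describes.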
FIG. 2 is a simplified diagram of a display usable to render an image captured using the image sensor of FIG. 1. For each sensor in FIG. 1, there are three sub-pixel values displayed on the display in FIG. 2. There is a green sub-pixel value, a red sub-pixel value, and a blue sub-pixel value. Each sub-pixel color value may, for example, be an eight-bit value. Because only one color is sampled at each pixel location in the Bayer pattern image sensor, two additional sub-pixel color values need to be determined for each pixel location in order to have all three sub-pixel values at each pixel location. The sub-pixel values that need to be determined are said to be “missing” sub-pixel values. The process of recreating the missing color sub-pixel values is called “demosaicing” or “color interpolation”.
FIG. 3 (Prior Art) illustrates one simple conventional color interpolation method. Suppose the current pixel location is location R33. Assume that the sub-pixel value for green is to be determined for location R33 because the Bayer pattern image sensor only captured a red sample value at location R33. Immediately above pixel location R33, immediately below location R33, immediately to the left of R33, and immediately to the right of R33 are pixel locations with green sub-pixel sample values. To determine the green sub-pixel value for pixel location R33, the neighboring green sub-pixel sample values G23, G32, G34, and G43 are averaged. The result is an estimate of what the green sub-pixel value should be at red pixel location R33.
FIG. 4 (Prior Art) illustrates how the blue sub-pixel value for pixel location R33 is determined. Note from FIG. 1 that the Bayer pattern image sensor generates blue sub-pixel sample values at the neighboring diagonal pixel locations B22, B24, B42 and B44. These neighboring diagonal blue sub-pixel sample values are averaged to get an estimate of what the blue sub-pixel value should be at red pixel location R33.
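The simple averaging of FIGS. 3 and 4 can be sketched as follows. The function names and the use of a 2-D list indexed from zero are assumptions for illustration; the arithmetic is exactly the four-neighbor averaging described above.

```python
def interpolate_green_at_red(mosaic, r, c):
    """Estimate the missing green value at a red pixel location by
    averaging the four green samples immediately above, below, left,
    and right (the FIG. 3 method)."""
    return (mosaic[r - 1][c] + mosaic[r + 1][c] +
            mosaic[r][c - 1] + mosaic[r][c + 1]) / 4.0

def interpolate_blue_at_red(mosaic, r, c):
    """Estimate the missing blue value at a red pixel location by
    averaging the four blue samples at the diagonal neighbors
    (the FIG. 4 method)."""
    return (mosaic[r - 1][c - 1] + mosaic[r - 1][c + 1] +
            mosaic[r + 1][c - 1] + mosaic[r + 1][c + 1]) / 4.0
```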
FIG. 5 (Prior Art) illustrates a problem with the color interpolating method of FIGS. 3 and 4. Suppose the image to be displayed includes an object 1 of solid red. Object 1 is disposed on a background of solid blue. Object 1 therefore creates a diagonal edge 2 that runs across the area of pixel location R33. There is solid blue above and to the left of edge 2. There is solid red below and to the right of edge 2. If the nearest neighboring four diagonal blue pixel values were just averaged as set forth above in FIG. 4 to interpolate the missing blue sub-pixel value at location R33, then the color values on either side of edge 2 would be averaged. This would result in a mixing of the large blue values on one side of the edge with the small blue values on the other side of the edge. Performing color interpolation using this mixing would reduce the sharpness of the edge.
To remedy this problem, it is common to obtain a metric of the amount of vertical edge and a metric of the amount of horizontal edge at a pixel location. If an edge is determined to be present, then the metrics are used to estimate the orientation of the edge. Once the orientation of the edge has been estimated, an appropriate interpolation function is chosen that does not result in undesirable mixing across the edge.
Suppose, for example, that one metric is obtained of how much edge there is in the vertical dimension and another metric is obtained of how much edge there is in the horizontal dimension. If the metrics indicate that there is more vertical edge than horizontal edge, then an interpolation function is applied that does not average pixel values in the horizontal dimension. Mixing of values across the vertically extending edge is thereby kept small. Similarly, if the metrics indicate that there is more horizontal edge than vertical edge, then an interpolation function is applied that does not average pixel values in the vertical dimension. Mixing of values across the horizontally extending edge is thereby kept small. By using the metrics to choose the appropriate interpolation function, edge sharpness in the final image is maintained.
Different types of metrics can be used. One example of a metric is a first order gradient. A first order horizontal gradient might be obtained by subtracting the pixel value at the location immediately to the left of a pixel location from the pixel value at the location immediately to the right. If this difference is zero, then no horizontal gradient is detected between the two pixel locations. If the difference is large, then a large horizontal gradient is detected. Such a first order gradient detects a change in a string of pixel values extending in a direction. First order gradients are therefore usable as metrics to detect edges. A first order gradient can be obtained for the vertical dimension, and a first order gradient can be obtained for the horizontal dimension. These two first order gradients are then used to determine whether an edge is present and, if so, what the orientation of the edge is.
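A minimal sketch of such first order gradient metrics, using the absolute difference between the opposing neighbors in each dimension (the function name and the use of absolute values are assumptions for illustration):

```python
def first_order_gradients(pix, r, c):
    """First-order gradient metrics at (r, c).

    A value of zero means no change is detected in that dimension
    between the two neighboring pixel locations; a large value
    suggests an edge crossing that dimension."""
    h_grad = abs(pix[r][c - 1] - pix[r][c + 1])   # change across columns
    v_grad = abs(pix[r - 1][c] - pix[r + 1][c])   # change across rows
    return v_grad, h_grad
```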
Another example of a metric is a second order gradient. A second order gradient detects not the change in pixel values in a direction, but rather how that change is itself changing. The differences between successive pixel values are taken extending in a direction. If the magnitudes of these successive difference values do not change, then the second order gradient in that direction is zero. If, on the other hand, the magnitudes of the difference values change, then there is a second order change in that direction. A second order gradient can, for example, be used to exclude constant changes, such as gradual shading, from the determination of whether an edge is present.
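The second order gradient of three consecutive samples can be sketched as the difference of the two first differences (an illustrative formulation; the function name is assumed):

```python
def second_order_gradient(a, b, c):
    """Second-order gradient of three consecutive samples a, b, c.

    The first differences are (b - a) and (c - b); the second-order
    gradient measures how much those differences differ.  A constant
    ramp (constant first difference) yields zero, so a gradual,
    constant change is not mistaken for an edge."""
    return abs((c - b) - (b - a))       # equivalently |a - 2*b + c|
```

For example, a constant ramp such as 10, 20, 30 gives a second order gradient of zero, while a step such as 0, 0, 100 gives a large value.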
In addition to the first and second order gradients described above, other vertical metrics and horizontal metrics are also used in conventional color interpolation. But regardless of the type of metric, metrics are generally used to select a single best interpolation function. This is undesirable in certain situations because the rejected interpolation function may be almost as good as the one chosen. Consider, for example, the situation in which a metric of ten is obtained in the vertical dimension and a metric of nine is obtained in the horizontal dimension. The two metrics are close to one another, yet only the vertical interpolation function is used because it is determined to be better than the horizontal interpolation function.
Another technique involves determining a dominant orientation of directional energy in an area surrounding a pixel to be interpolated. U.S. Pat. No. 6,404,918 describes a method wherein a neighborhood of pixels is considered. The interpolated pixel value is a weighted sum, where each neighborhood pixel is multiplied by its own weighting factor. Each weighting factor is determined by taking a vector dot product of a vector to the neighborhood pixel and the dominant orientation vector. Performing a vector dot product computation involves performing multiply operations. Many multiply operations are required to calculate just one interpolated pixel value. The dominant orientation vector method is therefore undesirable in certain applications. The computational complexity may, for example, require additional hardware in order for all the needed computations to be performed in an available amount of time. Furthermore, performing the many computations may consume a non-trivial amount of energy. In a battery-powered consumer device such as a cell phone, extending battery life and reducing complexity and cost are often principal concerns. A solution is desired.
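The general shape of such a weighted-sum computation can be sketched as follows. This is not the formula of U.S. Pat. No. 6,404,918; it is a simplified illustration of the kind of per-neighbor dot-product weighting the text describes, with the absolute value and the normalization by the weight total being assumptions added to make the sketch self-contained.

```python
def orientation_weighted_interpolate(neighbors, orientation):
    """Weighted-sum interpolation sketch.

    Each neighbor pixel is weighted by the dot product of its offset
    vector with the dominant orientation vector, so neighbors lying
    along the edge direction dominate the sum.  `neighbors` is a list
    of ((dr, dc), value) pairs; `orientation` is a (dr, dc) vector.

    Note the multiply count: two multiplies per dot product plus one
    per weighted term, for every interpolated pixel -- the
    computational cost objected to above."""
    weighted_sum = 0.0
    weight_total = 0.0
    for (dr, dc), value in neighbors:
        w = abs(dr * orientation[0] + dc * orientation[1])  # |dot product|
        weighted_sum += w * value
        weight_total += w
    return weighted_sum / weight_total if weight_total else 0.0
```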