(1) Field of the Invention
This invention relates generally to digital image processing and relates more particularly to a digital color imager and related methods having an improved luminance representation using hexagonal pixels comprising white and color pixels.
(2) Description of the Prior Art
Color is the perceptual result of light in the visible region of the spectrum, having wavelengths in the region of 400 nm to 700 nm, incident upon the retina. The spectral distribution of light relevant to the human eye is often expressed as 31 components, each representing a 10 nm band.
The human retina has three types of color photoreceptors, called cone cells, which respond to incident radiation with somewhat different spectral response curves. Because there are exactly three types of color photoreceptor, three numerical components are necessary and sufficient to describe a color, provided that appropriate spectral weighting functions are used. These cones provide “photopic” vision.
Photoreceptors are not distributed evenly throughout the retina. Most cones lie in the fovea, whereas rods dominate peripheral vision. Rods respond to short-wavelength light, up to about 510 nm, and greatly outnumber the cones. They are very sensitive to very low levels of light and provide “scotopic” vision. There is no color in scotopic vision; it is processed as grayscale.
Since the human eye has more photoreceptors handling brightness than handling color, luminance is more important to vision than color.
Pixel values in accurate gray-scale images are based upon broadband brightness values, whereas pixel values in accurate color images are based upon tristimulus values. Color images are sensed and reproduced based upon tristimulus values whose spectral composition is carefully chosen according to the principles of color science. As their name implies, tristimulus values come in sets of three. In most imaging systems, tristimulus values are subjected to a non-linear transfer function that mimics the lightness response of vision. Most imaging systems use RGB values whose spectral characteristics do not exactly match the tristimulus values of the human eye.
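As an illustration only (not part of any cited reference), such a non-linear transfer function can be sketched as a simple power-law (gamma) encoding; the exponent 2.2 is an assumed, typical value:

```python
def gamma_encode(linear, gamma=2.2):
    """Encode a linear tristimulus value in [0.0, 1.0] with a simple
    power-law transfer function that roughly mimics the lightness
    response of human vision (illustrative approximation only)."""
    return linear ** (1.0 / gamma)

# A mid-gray linear intensity of 0.18 encodes to roughly 0.46,
# compressing bright tones more strongly than dark ones.
print(round(gamma_encode(0.18), 2))  # prints 0.46
```

Real transfer functions (e.g. those of video standards) add a linear segment near black, which this sketch omits.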
A combination of real world physical characteristics determines what the human vision system perceives as color. A color space is a mathematical representation of these characteristics. Color spaces are often three-dimensional. There are many possible color space definitions.
Digital cameras use either an RGB representation (R, G, and B in one pixel) or a Bayer representation, wherein the pixels are arranged as shown in FIG. 1 (prior art). A 2×2 cell 1 contains one red (R) pixel, one blue (B) pixel, and two green (G) pixels.
Another color space is Hue, Saturation, and Luminance (HSL). In this color space, scenes are not described in terms of red, green, and blue, but in terms of hue, saturation, and luminance. We see things as colors, or hues, that either have a washed-out look or have deep, rich tones; this corresponds to low or high saturation, respectively. Hue is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors red, yellow, green, and blue, or a combination of two of them. Saturation is the colorfulness of an area judged in proportion to its brightness.
Color saturation control is the process of increasing or decreasing the amount of color in an image without changing the image contrast. When saturation is lowered, the amount of white in the colors is increased and the colors look washed out. By adjusting the color saturation, the same image can be rendered as anything from a black-and-white image to a fully saturated image having strong colors.
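A minimal sketch of such a saturation control, assuming floating-point RGB channels in [0, 1] and the Rec. 601 luminance weights (illustrative only, not part of any cited reference):

```python
def adjust_saturation(r, g, b, s):
    """Blend an RGB pixel toward (s < 1) or away from (s > 1) its own
    luminance. s = 0 yields grayscale, s = 1 leaves the pixel
    unchanged; the luminance, and hence the image contrast, is
    preserved for any s. Illustrative sketch only."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luminance
    mix = lambda c: y + s * (c - y)
    return mix(r), mix(g), mix(b)

# Fully desaturating a pure red pixel yields a gray of equal luminance:
print(adjust_saturation(1.0, 0.0, 0.0, 0.0))  # prints (0.299, 0.299, 0.299)
```

Values of s above 1 can push channels outside [0, 1], so a real implementation would clamp the result.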
Different color spaces are used to describe color images; the YUV and YCbCr color spaces are becoming increasingly important.
The YUV color space is characterized by the luminance (brightness) “Y” being kept separate from the chrominance (color). There is a simple mathematical transformation from RGB: Y is approximately 30% red, 60% green, and 10% blue, the same as the definition of white above. U and V are computed by removing the “brightness” factor from the colors. By definition, U=Blue−Y, so U represents colors from blue (U>0) to yellow (U<0). Likewise, V=Red−Y, so V represents colors from magenta (V>0) to cyan (blue-green) (V<0).
The YCbCr color space was developed as part of Recommendation CCIR 601. The YCbCr color space is closely related to the YUV space, but with the color coordinates shifted to allow all-positive coefficients:

Cb=(U/2)+0.5
Cr=(V/1.6)+0.5,

wherein the luminance Y is identical to the YUV representation.
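The RGB-to-YCbCr path described above can be sketched with the approximate weights from the text (channels assumed to be floats in [0, 1]; illustrative only, not the exact CCIR 601 coefficients):

```python
def rgb_to_ycbcr(r, g, b):
    """Sketch of the RGB -> YUV -> YCbCr conversion described above,
    using the approximate weights from the text. Cb and Cr are scaled
    and shifted so that all coefficients are positive."""
    y = 0.3 * r + 0.6 * g + 0.1 * b   # luminance
    u = b - y                          # blue-difference chrominance
    v = r - y                          # red-difference chrominance
    cb = u / 2.0 + 0.5
    cr = v / 1.6 + 0.5
    return y, cb, cr

# Gray pixels carry no chrominance, so Cb and Cr sit at the 0.5 offset:
print(tuple(round(c, 3) for c in rgb_to_ycbcr(0.5, 0.5, 0.5)))  # prints (0.5, 0.5, 0.5)
```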
U.S. Pat. No. 6,441,852 (to Levine et al.) describes an extended dynamic range imager. An array of pixels provides an output signal for each pixel related to an amount of light captured for each pixel during an integration period. A row of extended dynamic range (XDR) sample and hold circuits having an XDR sample and hold circuit for each column of the array captures an XDR signal related to a difference between the output signal and an XDR clamp level to which the pixel is reset at a predetermined time before the end of the integration period. A row of linear sample and hold circuits having a linear sample and hold circuit for each column of the array captures a linear signal related to a difference between the output signal and an initial output signal to which the pixel is reset at the beginning of the integration period.
FIG. 2 (prior art) shows a diagram of the relationship between illumination and the yield of electrons per pixel for a “normal” imager 21 and an XDR imager 20. The resolution of the XDR imager 20 is much higher under low-illumination conditions than the resolution of “normal” imagers. When the illumination exceeds the XDR breakpoint 22, the additional yield of electrons is significantly reduced.
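The two-segment characteristic of FIG. 2 can be sketched as a piecewise-linear response; the breakpoint position and the reduced upper slope used here are hypothetical values chosen purely for illustration:

```python
def xdr_response(illumination, breakpoint=0.6, high_slope=0.1):
    """Sketch of a two-segment XDR characteristic (hypothetical
    values): below the breakpoint the electron yield rises with unit
    slope; above it, the additional yield is reduced significantly,
    extending the dynamic range before the pixel saturates."""
    if illumination <= breakpoint:
        return illumination
    return breakpoint + high_slope * (illumination - breakpoint)

# Below the breakpoint the response is linear; far above it, the
# output grows only slowly, so very bright scenes still fit in range.
print(xdr_response(0.3), round(xdr_response(1.6), 3))
```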
XDR enhances performance, especially in low-light conditions. The XDR APS also uses individual pixel addressing to reduce column overload, or “blooming”; the excess charge is absorbed in the substrate and in adjacent pixel drain regions.
For some years, cameras having hexagonal pixels instead of square pixels have been available. FIG. 3 (prior art) illustrates a typical arrangement of hexagonal RGB pixels. A key advantage of an arrangement of hexagonal pixels is that the distance between a given pixel and each of its immediate neighbors is the same. Furthermore, hexagonal sampling requires about 13% fewer samples than rectangular sampling. In principle, an arrangement of hexagonal pixels models the human visual system more precisely than square pixels do: the cone distribution on the human fovea resembles a hexagonal arrangement of pixels more than a square arrangement.
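The equal-distance property of hexagonal neighbors can be verified with a small sketch using axial hexagonal coordinates (unit center spacing assumed; illustrative only):

```python
import math

def hex_center(q, r, spacing=1.0):
    """Center of a hexagonal pixel at axial coordinates (q, r) on a
    pointy-top hexagonal grid with the given center-to-center spacing."""
    x = spacing * (q + r / 2.0)
    y = spacing * (math.sqrt(3.0) / 2.0) * r
    return x, y

# Distances from the pixel at (0, 0) to its six immediate neighbors;
# on a square grid the corresponding distances would be 1 and sqrt(2).
neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
dists = [math.hypot(*hex_center(q, r)) for q, r in neighbors]
print([round(d, 6) for d in dists])  # all six distances equal 1.0
```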
Nevertheless, it is a challenge for the designers of digital imagers to achieve solutions providing images that are nearly equivalent to human vision.
There are patents or patent applications related to this area:
U.S. Pat. No. 6,642,962 (to Lin et al.) describes a digital-camera processor receiving mono-color digital pixels from an image sensor. Each mono-color pixel is red, blue, or green. The stream of pixels from the sensor has alternating green and red pixels on odd lines, and blue and green pixels on even lines in a Bayer pattern. Each mono-color pixel is white balanced by multiplying with a gain determined in a previous frame and then stored in a line buffer. A horizontal interpolator receives an array of pixels from the line buffer. The horizontal interpolator generates missing color values by interpolation within horizontal lines in the array. The intermediate results from the horizontal interpolator are stored in a column buffer, and represent one column of pixels from the line buffer. A vertical interpolator generates the final RGB value for the pixel in the middle of the column register by vertical interpolation. The RGB values are converted to YUV. The vertical interpolator also generates green values for pixels above and below the middle pixel. These green values are sent to an edge detector. The edge detector applies a filter to the 3 green values and 6 more green values from the last 2 clock cycles. When an edge is detected, an edge enhancer is activated. The edge enhancer adds a scaled factor to the Y component to sharpen the detected edge. Color enhancement is performed on the U and V components. The line buffer stores only 4 full lines of pixels and no full-frame buffer is needed.
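As an illustrative simplification (not Lin et al.'s actual circuit), the horizontal interpolation of missing color values within one Bayer row might be sketched as:

```python
def interpolate_row(row, colors):
    """Horizontally interpolate the missing color in one Bayer row.
    `row` holds mono-color samples and `colors` the color of each
    sample (e.g. 'G','R','G','R',... on an odd line). Each output
    pixel keeps its measured value and gains the row's other color as
    the average of the nearest horizontal neighbors (hypothetical
    simplification; edges reuse the single available neighbor)."""
    out = []
    n = len(row)
    for i in range(n):
        pixel = {colors[i]: row[i]}
        # Bayer rows alternate between two colors, so the neighboring
        # samples carry the color missing at position i.
        j = i - 1 if i > 0 else i + 1
        k = i + 1 if i + 1 < n else i - 1
        pixel[colors[j]] = (row[j] + row[k]) / 2.0
        out.append(pixel)
    return out

# On a 'G R G R' line, each R position gets an averaged G and vice versa:
print(interpolate_row([10, 20, 30, 40], ['G', 'R', 'G', 'R']))
```

A full demosaic would then interpolate vertically across rows, as the patent's two-stage interpolator does.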
U.S. Patent Application Publication 2003/0016295 (to Nakakuki) discloses an invention making it possible to display an image signal with as high a picture quality as would have been obtained with a solid-state image pick-up device having color filters arrayed in a mosaic pattern. The image signal obtained from the solid-state image pick-up device with a Bayer array of the three primary colors R, G, and B is separated by a color separation circuit into R-color, G-color, and B-color signals. These color signals are attenuated by respective filters at half the horizontal sampling frequency in order to suppress the occurrence of moiré noise. The G-color filter circuit has a narrower attenuation bandwidth than the R-color filter circuit and the B-color filter circuit. The color signals thus filtered are adjusted in level at a white balance circuit and then mixed by addition at a mixer, thus generating a luminance signal. By narrowing the attenuation bandwidth of the G-color signal, the resolution can be kept high while the occurrence of moiré noise is suppressed.
U.S. Patent Application Publication 2004/0114047 (to Vora et al.) describes a method for demosaicing an offset geometric array. The method comprises the step of arranging a plurality of sensors in an offset geometric array, the sensors having a defined geometric shape for each sensor and its respective sensor sample. Another step is moving a first sample from a sensor in an odd row of the offset geometric array to an uppermost point of a geometric shape. A further step is moving a second sample from a sensor in an even row of the offset geometric array to a point that is vertically aligned with the first sample from the odd row, which point is also contained within the same geometric shape as the first sample from the odd row.