This invention relates generally to the processing of digital video signals and in particular to color transformation.
In the human eye, color vision is provided by discrete receptors called cones, which are distributed across the central portion of the retina known as the fovea. There are three types of cones; each type is most responsive to light with wavelengths corresponding to one of the primary colors: red, green or blue. What is perceived as colored light is actually a mixture of light of different wavelengths, described by a spectral power distribution (SPD). Color reproduction relies on tristimulus theory, which states that any color can be reproduced by stimulating the cones in the same way as the original SPD. A video camera captures a scene in separate image planes using red, green and blue filters, recording the intensity of each color. These intensities are labeled R, G and B, respectively.
A cathode ray tube (CRT) is commonly used to display color images. The face of a CRT is covered with dots of phosphor that emit light in the range of the three primary colors when stimulated by an electron beam. Each phosphor dot emits a range of wavelengths and each cone responds in some degree to a range of colors, so it is not possible to manipulate the primary colors independently. Further, the phosphor dots have a non-linear response. To compensate for this non-linearity, the inverse of this response, known as a gamma response, is often applied in the video camera. A description of the gamma response is given in the International Telecommunication Union (ITU) specification ITU Rec. 709, for example. The gamma-corrected video signals are labeled R′, G′ and B′.
Early video signals were primarily used for monochromatic (black and white) television. For this application, only the total intensity of the light was required. When color television was introduced, it was desirable to maintain compatibility with older television receivers, so rather than transmitting the R′, G′ and B′ signals, a total luminance (luma) signal, Y′, and two color difference signals are used. The R′, G′ and B′ signals can be calculated from these three signals if required. For example, in digital video we have
    Y′ = a·R′ + b·G′ + c·B′
    Cr = Sr·(Y′ − R′) + d
    Cb = Sb·(Y′ − B′) + d,
where a, b and c are the proportional responses of each color and Sr, Sb and d are scaling factors to keep the components within a specific range. These equations are known as the luma equations and can be written in matrix form as

    [ Y′     ]       [ R′ ]
    [ Cr − d ] = M · [ G′ ]
    [ Cb − d ]       [ B′ ]

where M is the 3×3 matrix

        [ a           b      c          ]
    M = [ Sr·(a − 1)  Sr·b   Sr·c       ]
        [ Sb·a        Sb·b   Sb·(c − 1) ]
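As an illustration, the direct and matrix forms of the luma equations can be checked against each other numerically. The numeric values used below (the Rec. 709 luma weights a = 0.2126, b = 0.7152, c = 0.0722, together with arbitrary scaling factors Sr, Sb and offset d) are assumptions for the sake of example only:

```python
def luma_direct(Rp, Gp, Bp, a, b, c, Sr, Sb, d):
    # Direct evaluation of the luma equations.
    Yp = a * Rp + b * Gp + c * Bp
    Cr = Sr * (Yp - Rp) + d
    Cb = Sb * (Yp - Bp) + d
    return Yp, Cr, Cb

def luma_matrix(a, b, c, Sr, Sb):
    # The 3x3 matrix M from the text.
    return [
        [a,            b,      c],
        [Sr * (a - 1), Sr * b, Sr * c],
        [Sb * a,       Sb * b, Sb * (c - 1)],
    ]

def apply_matrix(M, v):
    # Ordinary 3x3 matrix-vector product.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```

For any input vector (R′, G′, B′), applying M yields (Y′, Cr − d, Cb − d), matching the direct form term by term.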
Different types of color reproduction devices will have different color-reproducing characteristics, called color spaces. In order to describe colors without reference to a particular device, a device-independent color space was defined by the 1931 CIE Standard Observer study. The study defined three mathematical primaries, X, Y and Z, that can be used to define all observable colors. In this case, Y is the luminance and X and Z are idealized chromaticity primaries. Chromaticity is usually defined in terms of the normalized coordinates

    x = X / (X + Y + Z)  and  y = Y / (X + Y + Z).
The characteristics of a device can then be specified if the x and y values of the device primaries are known.
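A minimal sketch of this normalization in Python; for instance, an equal-energy stimulus with X = Y = Z maps to the chromaticity coordinates x = y = 1/3:

```python
def chromaticity(X, Y, Z):
    # Normalized chromaticity coordinates of a tristimulus value.
    s = X + Y + Z
    return X / s, Y / s
```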
In particular, these definitions and relationships make it possible to convert color video signals intended for one device into color video signals for another device.
Many digital video signals are configured so that they will reproduce accurate colors when viewed on a device that operates in accordance with ITU Recommendation 709. These signals are designed to compensate for the characteristics of a phosphor-based CRT. If a different display, such as a display that uses laser light sources, is to be used, the signals must be transformed. This transformation process requires a combination of linear and non-linear computations, which are described in more detail below. In order to perform these computations in real time, a powerful, and therefore expensive, signal processor is required.
Conventional color transformation schemes may be used to correctly map colors from one device to another. However, a laser light display has a much greater range (gamut) of colors than a traditional CRT display. In order to utilize this extended color gamut, a more complicated, possibly non-linear, transformation is required. The computation may be avoided by use of a look-up table, but to store all combinations for 8-bit encoded colors would require 2^24 memory locations, each holding 24 bits of information, i.e. 48 MB (megabytes) of memory. A memory of this size is impractical. The use of a powerful video computer to perform these calculations is described in "Colorimetry in the TeraNex Video Computer", by Linc Brookes, TeraNex publication, 1999. The approach recommended in this publication is to define the non-linear warping of the color space between reference points. However, this requires thousands of multiplications for each pixel.
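The 48 MB figure follows directly from the table dimensions: 2^24 index combinations (one per 24-bit input color), each holding a 24-bit (3-byte) output:

```python
entries = 2 ** 24              # every combination of three 8-bit components
bytes_per_entry = 3            # 24 bits of output per entry
total_mb = entries * bytes_per_entry / 2 ** 20
print(total_mb)                # 48.0
```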
It is accordingly an object of the invention to provide a method and an apparatus for transforming video signals having reduced computation and memory requirements. The transformation may be linear or non-linear. Each pixel in the video image is represented by a vector of color components. According to the present invention, each vector of color components is transformed or warped to facilitate display on a specified device.
The transformation system of the invention uses a primary look-up table containing vectors of reference output components indexed by a table index, and a secondary look-up table containing matrices of coefficients indexed by the same table index. The look-up tables are stored in a memory. An index generator receives the vector of input components and generates a table index value corresponding to a reference input color, together with a vector of difference components representing the difference between said vector of input components and said reference input color. A correction calculator generates a vector of correction components dependent upon the vector of difference components and the indexed matrix of coefficients. The vector of reference output components is added to the vector of correction components to obtain a vector of transformed output components.
According to the method of the invention, a vector of input components (R, G, B) of an input color is transformed by: generating a table index using the vector of input components, the table index corresponding to a reference input color (R0, G0, B0); retrieving a vector of reference output components from a primary look-up table using the table index; retrieving a matrix M of coefficients from a secondary look-up table using the table index; generating a vector of difference components (r, g, b) representing the difference between the vector of input components (R, G, B) and the reference input color (R0, G0, B0); calculating a vector of correction components F(r, g, b) using the vector of difference components and the matrix of coefficients; and adding the vector of correction components to the vector of reference output components to obtain a vector of final output components of the output color.
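The steps above can be sketched in Python. The particular choices below, splitting each 8-bit component into a 4-bit table index and a 4-bit difference, holding the tables in dictionaries, and populating them for an identity transform, are illustrative assumptions, not part of the invention as claimed:

```python
SHIFT = 4                      # assumed split: top 4 bits index, low 4 bits difference
MASK = (1 << SHIFT) - 1

# Hypothetical tables populated for an identity transform: each reference
# input color maps to itself, and each coefficient matrix is the identity.
primary_lut = {}               # table index -> vector of reference output components
secondary_lut = {}             # table index -> 3x3 matrix of coefficients
for i in range(16):
    for j in range(16):
        for k in range(16):
            primary_lut[(i, j, k)] = (i << SHIFT, j << SHIFT, k << SHIFT)
            secondary_lut[(i, j, k)] = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def transform(R, G, B):
    idx = (R >> SHIFT, G >> SHIFT, B >> SHIFT)      # table index for (R0, G0, B0)
    r, g, b = R & MASK, G & MASK, B & MASK          # difference components (r, g, b)
    ref = primary_lut[idx]                          # reference output components
    M = secondary_lut[idx]                          # coefficient matrix
    corr = [M[i][0] * r + M[i][1] * g + M[i][2] * b for i in range(3)]
    return tuple(o + c for o, c in zip(ref, corr))  # final output components
```

With the identity tables the transform reproduces its input exactly; in practice the tables would be populated offline from the characteristics of the target device, so that each pixel costs one index computation, two table look-ups, and a single 3×3 matrix multiply rather than a full non-linear evaluation.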