Nowadays, intensive research is being conducted on image filming apparatuses, including surveillance cameras and video conferencing cameras. A typical image filming apparatus is equipped with a solid state image sensor (a charge coupled device; hereinafter abbreviated as CCD) that receives light from an object and generates electric signals (image signals) according to the amount of light received, the electric signals being transmitted externally as image data.
For more efficient transmission, the image data is normally transmitted externally only after passing through an image data compression apparatus or other processing apparatus. The image data compression apparatus compresses incoming image data according to a standard such as JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) and transmits the processed image data externally.
Conventional image filming apparatuses generate color difference signals Cr and Cb, which are coupled to the input of the image data compression apparatus, by temporarily converting image data obtained by filming into R (red), G (green), and B (blue) signals through a matrix calculation, and thereafter carrying out another matrix calculation on those color signals. Consequently, the inclusion of matrix calculation means in such apparatuses is essential to generate the R, G, and B signals, which results in increased complexity of the configuration of the apparatuses.
To solve the problem, Japanese Laid-Open Patent Application No. 7-67129/1995 (Tokukaihei 7-67129) discloses an image filming apparatus for directly generating color difference signals Cr and Cb without calculating R, G and B signals as follows. An image signal output of a CCD is separated into a luminance signal and a color signal by a Y/C separator circuit. The color difference signals Cr and Cb are then generated by conducting a single matrix calculation on these signals.
The arrangement obviates the need for matrix calculation means for generating the R, G, and B signals, making the overall configuration of the system less complex and facilitating the tuning of the multiplier coefficients, since the matrix calculation involves fewer parameters. The following description will explain a system incorporating such an image filming apparatus and an image data compression apparatus for compressing the image data transmitted from the image filming apparatus.
As shown in FIG. 8, an image filming apparatus 51 includes a color complementary filter 52, a CCD 53, and a color separator circuit 54.
FIG. 9 shows an example of the configuration of the color complementary filter 52, in which Ma (magenta), Ye (yellow), Cy (cyan), and G (green) color filters are arranged in a predetermined pattern. The color complementary filter 52 is placed in front of the CCD 53 so that light from an object passes through the color complementary filter 52 before entering the CCD 53.
FIG. 11 shows another example of the configuration of the color complementary filter 52, in which W (white), Ye (yellow), Cy (cyan), and G (green) color filters are arranged in a predetermined pattern.
The CCD 53 receives light that travels from the object via the color complementary filter 52 and generates electric signals for output according to the amount of light received. The CCD 53 includes a light receiving element (pixel) for each color filter of the color complementary filter 52. The output of the light receiving element is coupled to the input of the color separator circuit 54 as image data. The method of reading the image data by the pixels of the CCD 53 is determined based on the color complementary filter 52 used.
The color separator circuit 54 is for generating a luminance signal Y and color difference signals Cr and Cb according to an electric signal output of the CCD 53. The principle in the generation of the luminance signal Y and the color difference signals Cr and Cb will be explained later. The luminance signal Y and the color difference signals Cr and Cb are coupled to the input of a block forming circuit 62 of an image data compression apparatus 61.
The image data compression apparatus 61 includes the block forming circuit 62, a DCT circuit 63 and a data compressing circuit 64.
The block forming circuit 62 is for dividing the luminance signal Y and the color difference signals Cr and Cb generated by the color separator circuit 54 into a plurality of blocks. For example, according to the JPEG standard, one block is constituted by eight horizontal signals and eight vertical signals, i.e. 8×8 signals (8×8 pixels), as a unit in each of the luminance signal Y and the color difference signals Cr and Cb.
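The blocking step performed by the block forming circuit 62 can be sketched as follows. This is an illustrative Python sketch, not part of the original apparatus; the helper name and the sample 16×16 plane are assumptions, and the plane dimensions are taken to be multiples of 8.

```python
# Sketch of block forming: split a signal plane into 8x8 blocks, as the
# block forming circuit 62 does under the JPEG standard.
# Assumes plane dimensions are multiples of the block size.

def form_blocks(plane, size=8):
    """Split a 2-D list (height x width) into size x size blocks,
    scanning block rows top to bottom and blocks left to right."""
    h = len(plane)
    w = len(plane[0])
    blocks = []
    for top in range(0, h, size):
        for left in range(0, w, size):
            block = [row[left:left + size] for row in plane[top:top + size]]
            blocks.append(block)
    return blocks

# A 16x16 luminance plane yields four 8x8 blocks.
plane = [[y * 16 + x for x in range(16)] for y in range(16)]
blocks = form_blocks(plane)
```

In an actual system the same blocking is applied separately to the Y, Cr, and Cb planes.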
The DCT circuit 63 is for conducting a discrete cosine transform (hereinafter referred to as a DCT transform), which is a kind of orthogonal transform, on the luminance signal Y and the color difference signals Cr and Cb in a block. As a result, those signals are converted into data (DCT coefficients) on spatial frequency components. Generally, the DCT transform is represented by Equation 1. Note that in Equation 1, m and n denote the horizontal and vertical positions of the DCT coefficient, and i and j denote the vertical and horizontal positions of the luminance signal Y and the color difference signals Cr and Cb within the block. ##EQU1##
where m and n represent the positions of the DCT coefficients. ##EQU2##
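Equation 1 itself is not reproduced in this text (the ##EQU1## placeholder stands for the original figure). The following sketch assumes the standard JPEG form of the 8×8 two-dimensional DCT, which is the usual reading of Equation 1; the normalization factors shown are that assumption, not a quotation of the patent.

```python
import math

# Sketch of the 8x8 DCT transform conducted by the DCT circuit 63,
# assuming the standard JPEG 2-D DCT (the patent's Equation 1 is an
# image placeholder, so the exact normalization here is an assumption).

def c(k):
    # Normalization factor: 1/sqrt(2) for the DC term, 1 otherwise.
    return 1.0 / math.sqrt(2.0) if k == 0 else 1.0

def dct_8x8(block):
    """Return the 8x8 DCT coefficients S[m][n] of an 8x8 block s[i][j]."""
    out = [[0.0] * 8 for _ in range(8)]
    for m in range(8):
        for n in range(8):
            acc = 0.0
            for i in range(8):
                for j in range(8):
                    acc += (block[i][j]
                            * math.cos((2 * i + 1) * m * math.pi / 16)
                            * math.cos((2 * j + 1) * n * math.pi / 16))
            out[m][n] = 0.25 * c(m) * c(n) * acc
    return out

# A constant block concentrates all energy in the DC coefficient (m=n=0);
# every other coefficient vanishes.
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)
```

Production implementations use fast factorized DCTs rather than this direct quadruple loop, which is written for clarity only.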
The data compressing circuit 64 is for quantizing each DCT coefficient generated by the DCT circuit 63 and carrying out a run length encoding and a Huffman coding after zigzag-scanning the quantized DCT coefficients.
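The zigzag scan that precedes the run length encoding can be sketched as follows. The ordering shown is the conventional JPEG zigzag, assumed here because the circuit's exact scan table is not given in the text.

```python
# Sketch of the zigzag scan order used by the data compressing circuit 64
# before run length encoding. Coefficients are visited along anti-diagonals
# of the 8x8 block, so low spatial frequencies come first.

def zigzag_order(n=8):
    """Return (row, col) pairs of an n x n block in zigzag scan order."""
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],
                        # Odd anti-diagonals are walked top-to-bottom
                        # (sort by row); even ones bottom-to-top (by column).
                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]),
    )

order = zigzag_order()
```

Grouping low-frequency coefficients first tends to leave long runs of zeros at the tail of the scan, which is what makes the subsequent run length encoding effective.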
The description below will explain how the system operates. First, let us define some terms used in the description for clarity.
FIG. 10 schematically shows the light receiving plane of the CCD 53. Ma, Cy, Ye, and G represent pixels receiving light that has passed through the Ma, Cy, Ye, and G color filters of the color complementary filter 52. The coordinates of the pixel Ma at the top left corner are denoted as (0,0), and those of a pixel X distanced from the pixel Ma by p vertically and by q horizontally as (p,q). The output of the pixel X is denoted as X_pq.
The luminance signal Y and the color difference signals Cr and Cb are obtained from the outputs of four adjacent pixels. Let us therefore presume that the luminance signal Y and the color difference signals Cr and Cb are outputted at the lattice point formed by the four pixels and, for convenience in explanation, that the coordinates of the top left lattice point, surrounded by the pixels (p,q) = (0,0), (1,0), (1,1), and (0,1), are (0,0). Under these presumptions, the coordinates of the lattice point distanced from the lattice point (0,0) by i vertically and by j horizontally are denoted as (i,j), and the luminance signal Y and the color difference signals Cr and Cb outputted at that lattice point are denoted as Y_ij, Cr_ij, and Cb_ij.
If the color complementary filter 52 shown in FIG. 9 is used, the data produced by the pixels of the CCD 53 is read in a two line addition reading method, and the lattice points exist in every other horizontal row as shown in FIG. 10. By contrast, if the color complementary filter 52 shown in FIG. 11 is used, the data produced by the pixels of the CCD 53 is read in a total independent pixel reading method, and the lattice points exist in every horizontal row as shown in FIG. 12, unlike the case where the color complementary filter 52 shown in FIG. 9 is used.
With the configuration, as the light from the object passes through predetermined color filters of the color complementary filter 52 and enters the CCD 53, the light receiving element of the CCD 53, having received the light, generates electric signals for output to the color separator circuit 54 according to the amount of light received. The color separator circuit 54 generates the luminance signal Y and the color difference signals Cr and Cb for output to the block forming circuit 62 according to the principle detailed below.
If the color complementary filter 52 shown in FIG. 9 is used, generally, the Ma, Cy, Ye, and G signals are expressed by Equations 2 using the R, G, and B signals:

Ma = R + B
Ye = R + G
Cy = G + B
G = G     (Equations 2)
The luminance signal Y and the color difference signals C1 and C2 are expressed by Equations 3 using the Ma, Cy, Ye, and G signals:

7Y = Ma + Ye + Cy + G
C1 = Ma + Ye - Cy - G
C2 = Ma - Ye + Cy - G     (Equations 3)
By substituting Equations 2 into the right sides of Equations 3, Equations 4 are obtained:

7Y = 2R + 3G + 2B
C1 = 2R - G
C2 = 2B - G     (Equations 4)
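The substitution of Equations 2 into Equations 3 can be cross-checked numerically, as in the Python sketch below. This is an illustration only, not part of the apparatus; the sample R, G, B values are arbitrary.

```python
# Numeric check of the substitution of Equations 2 into Equations 3:
# for arbitrary R, G, B, the complementary signals must satisfy Equations 4.
R, G, B = 0.3, 0.5, 0.2

# Equations 2: complementary filter outputs in terms of the primaries.
Ma = R + B
Ye = R + G
Cy = G + B
Gs = G  # the G filter output; renamed to avoid clashing with the G primary

# Equations 3: luminance and intermediate color difference signals.
sevenY = Ma + Ye + Cy + Gs   # 7Y
C1 = Ma + Ye - Cy - Gs
C2 = Ma - Ye + Cy - Gs
```

Expanding term by term gives 7Y = 2R + 3G + 2B, C1 = 2R - G, and C2 = 2B - G, in agreement with Equations 4, and G = (7Y - C1 - C2)/5 recovers Equation 5.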
From Equations 4, the G signal is expressed by Equation 5 using the luminance signal Y and the color difference signals C1 and C2:

G = (7Y - C1 - C2)/5     (Equation 5)
The color difference signals Cr and Cb generally are expressed by Equations 6 using the luminance signal Y:

Cr = R - Y
Cb = B - Y     (Equations 6)
Therefore, Equations 7 are obtained from Equations 4 and Equations 6:

Cr = (C1 + G)/2 - Y
Cb = (C2 + G)/2 - Y     (Equations 7)
By substituting Equation 5 into Equations 7, Equations 8 are obtained:

Cr = (1/10 - 1/7)·7Y + (1/2 - 1/10)·C1 - (1/10)·C2
Cb = (1/10 - 1/7)·7Y - (1/10)·C1 + (1/2 - 1/10)·C2     (Equations 8)
Finally, Equations 9 are obtained from Equations 3 and Equations 8:

Y = (Ma + Ye + Cy + G)/7
Cr = (1/10 - 1/7)·(Ma + Ye + Cy + G) + (4/10)·(Ma + Ye - Cy - G) - (1/10)·(Ma - Ye + Cy - G)
Cb = (1/10 - 1/7)·(Ma + Ye + Cy + G) - (1/10)·(Ma + Ye - Cy - G) + (4/10)·(Ma - Ye + Cy - G)     (Equations 9)
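The full chain from Equations 2 through Equations 9 can be cross-checked numerically, as in the Python sketch below. It is an illustration only; the sample primaries are arbitrary.

```python
# Numeric check of Equations 9: starting from arbitrary R, G, B primaries,
# the luminance and color difference signals computed from the complementary
# filter signals should agree with Y = (2R+3G+2B)/7, Cr = R - Y, Cb = B - Y
# (the definitions implied by Equations 4 and 6).
R, G, B = 0.6, 0.4, 0.1

# Equations 2 (Gs = G filter output, renamed to avoid clashing with the primary).
Ma, Ye, Cy, Gs = R + B, R + G, G + B, G

# Equations 9.
Y = (Ma + Ye + Cy + Gs) / 7
Cr = ((1/10 - 1/7) * (Ma + Ye + Cy + Gs)
      + (4/10) * (Ma + Ye - Cy - Gs)
      - (1/10) * (Ma - Ye + Cy - Gs))
Cb = ((1/10 - 1/7) * (Ma + Ye + Cy + Gs)
      - (1/10) * (Ma + Ye - Cy - Gs)
      + (4/10) * (Ma - Ye + Cy - Gs))
```

This confirms the point of the derivation: Cr and Cb are obtained directly from the complementary signals, without ever computing R, G, and B explicitly.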
Therefore, from Equations 9, the luminance signal Y_01 and the color difference signals Cr_01 and Cb_01 are expressed by Equations 10:

Y_01 = (Ma_02 + Ye_11 + Cy_12 + G_01)/7
Cr_01 = (1/10 - 1/7)·(Ma_02 + Ye_11 + Cy_12 + G_01) + (4/10)·(Ma_22 + Ye_32 - Cy_31 - G_21) - (1/10)·(Ma_02 - Ye_11 + Cy_12 - G_01)
Cb_01 = (1/10 - 1/7)·(Ma_02 + Ye_11 + Cy_12 + G_01) - (1/10)·(Ma_22 + Ye_32 - Cy_31 - G_21) + (4/10)·(Ma_02 - Ye_11 + Cy_12 - G_01)     (Equations 10)
Consequently, the luminance signal Y_01 and the color difference signals Cr_01 and Cb_01, which are the outputs at the lattice point, are linear in the outputs of the pixels Ma, Cy, Ye, and G, as shown in Equations 10. The luminance signal Y_ij and the color difference signals Cr_ij and Cb_ij are expressed, for example, with matrices E^Y_ijpq, E^Cr_ijpq, E^Cb_ijpq, and X_pq by Equations 11: ##EQU3##
where i and j are each integers from 0 through 7, and p and q are each integers from 0 through 8.
Next, as the block forming circuit 62 forms 8×8 signal blocks out of the luminance signal Y_ij and the color difference signals Cr_ij and Cb_ij, the DCT circuit 63 conducts the DCT transform expressed by Equation 1 on those signals in each block to convert them into spatial frequency components (DCT coefficients). Consequently, the spatial frequency components Y_mn, Cr_mn, and Cb_mn of the luminance signal Y_ij and the color difference signals Cr_ij and Cb_ij are expressed by Equations 12:

Y_mn = F_mnij · Y_ij
Cr_mn = F_mnij · Cr_ij
Cb_mn = F_mnij · Cb_ij     (Equations 12)
Thereafter, the data compressing circuit 64 quantizes the DCT coefficients generated by the DCT circuit 63, zigzag-scans the quantized DCT coefficients, carries out run length encoding and Huffman coding, and transmits the compressed data externally.
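The quantize-then-run-length portion of that step can be sketched as follows. The single uniform quantization step and the (zero run, value) pair format are simplifying assumptions for illustration; the actual circuit uses per-coefficient quantization tables and Huffman codes, which are not reproduced here.

```python
# Sketch of the quantization and run length encoding performed by the
# data compressing circuit 64, with an illustrative uniform step size.

def quantize(coeffs, step=16):
    """Divide each coefficient by the step and round to the nearest integer."""
    return [round(c / step) for c in coeffs]

def run_length_encode(values):
    """Encode a zigzag-scanned sequence as (zero_run, nonzero_value) pairs."""
    pairs = []
    run = 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    if run:
        pairs.append((run, 0))  # trailing zeros, akin to an end-of-block marker
    return pairs

# A short zigzag-scanned coefficient sequence: quantization zeroes the small
# values, and the encoder compacts the resulting zero runs.
scanned = [80, 0, 0, -20, 0, 0, 0, 5, 0, 0]
encoded = run_length_encode(quantize(scanned))
```

The zero runs produced by quantization are exactly what the run length stage exploits, which is why the zigzag scan (low frequencies first) precedes it.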
By contrast, if the color complementary filter 52 shown in FIG. 11 is used, generally, the Y, B, and R signals are expressed by Equations 13 using the W, Cy, Ye, and G signals:

Y = (W + Ye + Cy + G)/8
B = (W - Ye + Cy - G)/8
R = (W + Ye - Cy - G)/8     (Equations 13)
If the total independent pixel reading method is adopted with the color complementary filter 52 shown in FIG. 11, the luminance signal Y is outputted at every lattice point, whereas the B and R signals are outputted only at lattice points with an even numbered j and an odd numbered j, respectively. So let us assume that at a lattice point with an odd numbered j, the same signal as the B signal calculated at the lattice point to its left (with an even numbered j) is outputted; in other words, B_(i,j=2m+1) = B_(i,j=2m) (m is an integer). Similarly, let us assume that at a lattice point with an even numbered j, the same signal as the R signal calculated at the lattice point to its left (with an odd numbered j) is outputted; in other words, R_(i,j=2n) = R_(i,j=2n-1) (n is an integer). Therefore, the signals Y_01, B_01, and R_01 are expressed by Equations 14:

Y_01 = (W_12 + Ye_01 + Cy_11 + G_02)/8
B_01 = B_00 = (W_00 - Ye_01 + Cy_11 - G_10)/8
R_01 = (W_12 + Ye_01 - Cy_11 - G_02)/8     (Equations 14)
According to Equations 6 and 14, the luminance signal Y_01 and the color difference signals Cr_01 and Cb_01 are expressed by Equations 15: ##EQU4##
Therefore, in this case also, the luminance signal Y_01 and the color difference signals Cr_01 and Cb_01, which are the outputs at the lattice point, are linear in the signals from the pixels W, Cy, Ye, and G, as expressed in Equations 15. Consequently, the luminance signal Y_ij and the color difference signals Cr_ij and Cb_ij, which are the outputs at the lattice point (i,j), are generally expressed by Equations 11. The operations by the block forming circuit 62, the DCT circuit 63, and the data compressing circuit 64 are the same as in the case where the color complementary filter 52 shown in FIG. 9 is used.
Image filming and image compression of an object belong to different technical fields and have so far been developed separately from each other. As a result, connecting stand-alone devices was the only choice available to build a system incorporating both technologies. Specifically, to build a comprehensive system with the image filming apparatus 51 and the image data compression apparatus 61, the output of the image filming apparatus 51, i.e. the luminance signal Y and the color difference signals Cr and Cb, was coupled to the input of the image data compression apparatus 61.
However, such a conventional system configuration has a problem: the color separation process by which the image filming apparatus 51 generates the luminance signal Y and the color difference signals Cr and Cb takes time, which lengthens the overall processing time of the system and reduces the efficiency of external transmission of image data.
Also, since the image filming apparatus 51 needs to be provided therein with a space (capacity) to accommodate the color separator circuit 54, it is difficult to reduce the size of the combined system of the image filming apparatus 51 and the image data compression apparatus 61.
Moreover, as mentioned earlier, the configuration disclosed in Japanese Laid-Open Patent Application No. 7-67129/1995 generates the color difference signals Cr and Cb by separating image signal output of a CCD into a luminance signal and color signals with a Y/C separator circuit and then conducting a matrix calculation on the luminance and color signals. Therefore, the process up to the compression of the image data output of the CCD is complex and time-consuming.