1. Field of the Invention
The present invention relates to image coding control in an image coding apparatus for effectively processing valid factors of discrete cosine transform (DCT) coefficients after quantization. In particular, the present invention relates to image coding control for enabling a digital camera using a JPEG (Joint Photographic Experts Group) system to contain a fixed number of photographs in one disc.
2. Description of the Prior Art
In the past, Huffman coding has been used to reduce the volume of coding in a digital camera using the JPEG system. However, since Huffman coding has a variable length, the volume of coding varies depending on the type of image. That is, a large compression ratio can be provided for a single-color wall or sky, but cannot be provided for an image with great variation. Therefore, the number of photographs which one disc can hold has varied depending on the images. Sometimes thirty photographs can be contained in one disc, and other times only five can be contained in one disc. In this kind of situation, it has been unknown how many discs were necessary when taking photographs, and in a situation such as a trip, many discs have had to be carried. The present invention provides an image coding apparatus for enabling discs to hold approximately the same number of photographs.
Before explaining the present invention, the conventional coding method is explained below. FIG. 18 shows a conventional image coding apparatus. The image coding apparatus of FIG. 18 comprises an input terminal 1 for inputting image data of 8×8 pixels, a discrete cosine transformer (DCT) 2, a zigzag converter 3, a quantizer 4, a quantizing table 5, an entropy coder 6, a coding table 7, and an output terminal 8 for outputting a parameter or coding data.
The operation of the conventional coding method is explained below. First of all, image data, for example, a component image Pxy (x, y = 0, 1, 2, 3, . . . , 7), is input from the input terminal 1. The input image data is transmitted to the discrete cosine transformer 2. The discrete cosine transformer 2 performs a two dimensional discrete cosine transformation. As a result of the two dimensional discrete cosine transformation, 64 (= 8×8) coefficients Suv are obtained. The 64 coefficients are rearranged from a serial order to a zigzag order by the zigzag converter 3 and transmitted to the quantizer 4. The quantizer 4 quantizes the 64 coefficients, each with a step size which differs for every coefficient position, using the quantizing table 5. The 64 quantized coefficients are transmitted to the entropy coder 6. The entropy coder 6 performs Huffman coding using the coding table 7, and the coding data is output from the output terminal 8 in units of several bytes (for example, a 16-bit width).
The 8×8-pixel component image Pxy (x, y = 0, 1, 2, 3, . . . , 7) is processed with a two dimensional DCT, and the coefficient Suv is obtained by the following formula (1):

Suv = (1/4) Cu Cv Σ(x=0 to 7) Σ(y=0 to 7) Pxy cos[(2x+1)uπ/16] cos[(2y+1)vπ/16]  (1)

where,
x, y = positions of pixels within a block
u, v = positions of the DCT coefficients
Cu, Cv = 1/√2 (for u, v = 0), 1 (otherwise)
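As a check on formula (1), the two dimensional DCT can be sketched in Python as follows (a minimal illustration of the formula; the function name dct2 is ours):

```python
import math

def dct2(P):
    """Two dimensional DCT of an 8x8 block P, per formula (1)."""
    C = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    S = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (P[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            # Scale by the normalizing factors of formula (1)
            S[u][v] = 0.25 * C(u) * C(v) * s
    return S
```

For a block of constant value p, this yields S00 = 8p with all AC coefficients vanishing, which illustrates the observation below that S00 dominates the AC components.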
The DCT coefficients Suv include the S00 (DC (direct current)) component and the other S01-S77 (AC (alternating current)) components. S00 is the largest, and the AC components are extremely small compared with S00.
Secondly, the zigzag converter 3 converts the DCT coefficients Suv from a serial order to a zigzag order. Then the DCT coefficients Suv are input to the quantizer 4. The quantizer 4 divides each DCT coefficient Suv by the corresponding value Quv of the quantizing table 5. That is, the quantized DCT coefficients Ruv are obtained by the following formula:

Ruv = round(Suv / Quv)
where the round function converts the result of Suv/Quv to the closest integer. Therefore, by setting the values Quv in the quantizing table 5 large compared to the AC coefficients, it is possible to make almost all the coefficients zero in the AC area.
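The quantization step above can be sketched in Python. Note that the document only says "closest integer"; rounding halves away from zero is a common convention in JPEG implementations and is an assumption here, as is the function name quantize:

```python
import math

def quantize(S, Q):
    """Ruv = round(Suv / Quv) for an 8x8 block, rounding halves away from zero."""
    def rnd(x):
        # Round to the nearest integer; 1.5 -> 2, -1.5 -> -2 (assumed convention)
        return int(math.floor(x + 0.5)) if x >= 0 else -int(math.floor(-x + 0.5))
    return [[rnd(S[u][v] / Q[u][v]) for v in range(8)] for u in range(8)]
```

With Quv chosen large relative to the AC coefficients, most quotients fall below 0.5 in magnitude and round to zero, which is exactly the effect described above.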
Then, the 64 coefficients Ruv output from the quantizer 4 are transmitted to the entropy coder 6. Since the coding systems are different for the DC coefficient (R00), which represents the average value of the 8×8 pixels, and for the other AC coefficients (all except R00), the two coding systems are explained separately.
First of all, a block diagram for the grouping of the DC coefficient R00 is illustrated in FIG. 20. In FIG. 20, a block delaying portion 61 delays the preceding DC coefficient R00, and a subtractor 62 subtracts the delayed preceding DC coefficient from the current DC coefficient R00. The difference is transmitted to the grouping portion 63. In the operation of the subtractor 62, as shown in FIG. 21, the difference between the DC coefficient (DC(i)) of the current block (i) and the DC coefficient (DC(i-1)) of the previously coded block (i-1) of the same color component is calculated, and the resultant difference (ΔDC(i)) is obtained. Except for special images, such as computer graphics, it is rare that the average value changes greatly between one block and an adjacent block. Therefore, the difference between a DC coefficient and the preceding DC coefficient centers around zero, and better coding efficiency can be expected by coding the difference obtained in this manner.
The difference of the DC coefficient obtained by the above subtractor 62 is input to the grouping portion 63, and the group to which the difference value belongs is obtained using the table shown in FIG. 22. The output from the grouping portion 63 represents the group number (S) of the DC difference value and the added bit (A), where the added bit (A) is a number indicating the order of the difference value within the group. For example, in group 3 of FIG. 22, the number of added bits is 3, and the DC difference value may take the eight values -7, -6, -5, -4, 4, 5, 6, 7. Therefore, the added bit 000 is assigned to -7, 001 to -6, 010 to -5, 011 to -4, 100 to 4, 101 to 5, 110 to 6 and 111 to 7. In this manner, the group number (S) and the added bit (A) are output from the grouping portion 63. The group number (S) and the added bit (A) are one dimensional Huffman coded by a one dimensional Huffman coder 65 shown in FIG. 26, as explained below.
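The grouping of FIG. 22 can be sketched in Python: the group number S is the bit length of the magnitude of the value, and the added bits follow the assignment illustrated above (negative values are offset so that -7 maps to 000 and so on). The function name group_value is ours:

```python
def group_value(v):
    """Return (group number S, added bits A) for a DC difference value."""
    if v == 0:
        return 0, ''             # group 0 carries no added bits
    s = abs(v).bit_length()      # group number = number of bits of |v|
    if v > 0:
        bits = v                 # positive values use their own binary form
    else:
        bits = v + (1 << s) - 1  # e.g. -7 -> 000, -4 -> 011 in group 3
    return s, format(bits, '0{}b'.format(s))
```

For example, group_value(-7) yields (3, '000') and group_value(7) yields (3, '111'), matching the table entries described above.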
FIG. 23 shows a block diagram for grouping the AC coefficients in the image coding apparatus. Since the AC coefficients are already rearranged by the zigzag converter as shown in FIG. 24, they are output in the order of the zigzag sequence. When the judgment portion 92 determines that an AC coefficient is 0, the run length counter 93 counts the number of consecutive AC coefficients that are 0, and outputs the number as a run length (N).
When an AC coefficient is other than 0, the group number (S) and the added bit (A) are generated in the grouping portion 94 in the same manner as for the DC difference. The group to which the AC coefficient belongs is obtained using the table shown in FIG. 25, where the added bit is the value which represents the order of the AC coefficient within the group. For example, assuming that the AC coefficient is 7, its group number is 3. In group 3, the number of added bits is 3, and the AC coefficient may take the eight values -7, -6, -5, -4, 4, 5, 6, 7. Therefore, the added bit 000 is assigned to -7, 001 to -6, 010 to -5, 011 to -4, 100 to 4, 101 to 5, 110 to 6 and 111 to 7. In this manner, the group number (S) and the added bit (A) are output from the grouping portion 94.
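The run-length counting over the zigzag-ordered AC coefficients can be sketched as follows. This is a simplified model of the judgment portion 92 and run length counter 93 (it ignores the special handling of runs longer than 15; the function name scan_ac is ours):

```python
def scan_ac(ac):
    """Scan zigzag-ordered AC coefficients into (run length N, coefficient) pairs.

    A trailing run of zeros is represented by the single marker 'EOB'."""
    pairs, run = [], 0
    for c in ac:
        if c == 0:
            run += 1               # count consecutive zero (invalid) coefficients
        else:
            pairs.append((run, c)) # emit run length N together with the value
            run = 0
    if run:                        # all remaining coefficients were zero
        pairs.append('EOB')
    return pairs
```

Applied to the 63 AC coefficients of the example discussed below (3, four zeros, 10, then 57 zeros), this produces the pairs (0, 3) and (4, 10) followed by the end-of-block marker.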
The group number (S) output from the grouping portion 94 and the run length (N) output from the run length counter 93 are Huffman coded by a two dimensional Huffman coder part 95 of a Huffman coder 70 and an AC coding table 96 as explained below.
FIG. 26 shows a circuit diagram for grouping and Huffman coding of the DC coefficients and the AC coefficients. In FIG. 26, a DC grouping portion 60a is the same circuit as that shown in FIG. 20, and an AC grouping portion 60b is the same circuit as that shown in FIG. 23.
The Huffman coder 70 is explained below. In FIG. 26, the Huffman coder 70 comprises a one dimensional Huffman coding part 65, a DC coding table 66, a DC added bit coupling portion 67 for coding the DC coefficient, a two dimensional Huffman coding part 95, an AC coding table 96 and an AC added bit coupling portion 97 for coding the AC coefficients. A coupling portion 68 couples the DC coding signal and the AC coding signal.
Firstly, the coding of the DC coefficient is discussed below. The group number S obtained from the DC grouping portion 60a is one dimensional Huffman coded in the one dimensional Huffman coding part 65 using the DC coding table 66, and output as a DC code.
One example of the one dimensional Huffman coding is shown in FIG. 27. For example, when the group number 3 is input into the one dimensional Huffman coding part 65, it outputs the code 110 as the one dimensional Huffman coded DC code. The code 110 output from the one dimensional Huffman coding part 65 is coupled with the added bits 100 (representing the value 4) output from the grouping portion 60a in the DC added bit coupling portion 67, and the result is 110100 (DC code + added bits).
Next, the coding of the AC coefficients is explained below. In FIG. 26, the group number S output from the grouping portion 60b and the run length N output from the run length counter 93 in the grouping portion 60b are Huffman coded by the two dimensional Huffman coding part 95 using the AC coding table 96. The added bit A is added to the coded value in the AC added bit coupling portion 97 and output as the AC code. The two dimensional Huffman coding is explained below using a concrete example.
FIGS. 28A, 28B, and 28C show an example of coding the signal Ruv. The signal Ruv is input to the entropy coder 6 in FIG. 26. The signal Ruv comprises one DC coefficient R00, one AC coefficient AC1 (value = 3), four invalid coefficients of value "0", one AC coefficient AC2 (value = 10), and 57 invalid coefficients of value "0".
According to the above example, the first AC coefficient AC1 is preceded by no invalid coefficients (run length N = 0) and has the value 3, as shown in FIG. 28A. The run length (N = 0) is input to the two dimensional Huffman coding part 95, together with the following value "3". Since the value of the AC coefficient is 3, the group number S becomes 2 according to FIG. 25. Since the value 3 of the valid coefficient is the largest in the group, the added bits become 11. Therefore, in the two dimensional Huffman coder, the run length (N) is 0 and the group number (S) is 2, so N/S becomes 0/2 as shown in FIG. 28B. Accordingly, in the two dimensional Huffman coding part 95, the two dimensional Huffman coding is performed according to the AC coding table 96 (FIG. 29) corresponding to N/S (0/2), the added bits are then appended to the result, and the two dimensional Huffman coding signal "10011" is obtained as shown in FIG. 28C.
On the other hand, the next portion includes four invalid coefficients having the value 0 (run length N = 4) and one AC coefficient AC2 (value = 10), as shown in FIG. 28A. This portion produces the group number (S) of 4 and the added bits 1010 (representing the value 10). Therefore, the AC grouping portion 60b causes the N/S signal of 4/4 to be supplied to the two dimensional Huffman coding part 95 as shown in FIG. 28B. Then the two dimensional Huffman coding part 95 carries out the two dimensional Huffman coding using the AC coding table 96 (FIG. 29). The added bits are then appended to the result from the two dimensional Huffman coding part 95 at the AC added bit coupling portion 97 to obtain the two dimensional Huffman coding signal "11111111100110001010" as shown in FIG. 28C. For the remaining 57 invalid coefficients, the two dimensional Huffman coding part 95 produces the EOB (End of Block) code "00", which is added to the result of the preceding signals as shown in FIG. 28C. Then, the coding process is completed.
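The worked example of FIGS. 28A-28C can be traced in Python. The two Huffman codes for N/S = 0/2 and 4/4 and the EOB code "00" are taken as given from the example (the full AC coding table of FIG. 29 is not reproduced here); the group numbers and added bits are computed as described above:

```python
def added_bits(v):
    """Group number and added bits for a nonzero AC coefficient (FIG. 25 convention)."""
    s = abs(v).bit_length()
    bits = v if v > 0 else v + (1 << s) - 1
    return s, format(bits, '0{}b'.format(s))

# Huffman codes as they appear in the example (excerpt of the table of FIG. 29)
AC_CODES = {(0, 2): '100', (4, 4): '1111111110011000'}
EOB = '00'

def encode_example():
    out = ''
    for run, coeff in [(0, 3), (4, 10)]:  # (run length N, AC value) pairs of FIG. 28A
        s, a = added_bits(coeff)
        out += AC_CODES[(run, s)] + a     # two dimensional Huffman code + added bits
    return out + EOB                      # remaining 57 zeros -> EOB code
```

Running encode_example() reproduces the concatenation of FIG. 28C: "10011", then "11111111100110001010", then the EOB code "00".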
Therefore, as stated above, the Huffman coding system is a variable length coding system where the code length changes according to incoming data, and the volume of coding changes according to the original image.
Since the conventional image coding apparatus is constructed in the manner explained above, there has been a problem that the compression ratio varies depending on the image, and the coding volume (compressed data volume) differs from image to image even when the images are of the same size.