1. Field of the Invention
The present invention relates to a method of and a system for recording image information in a transmitter of an information transmission apparatus, a recording apparatus having a disk or a magnetic tape as a storage medium, a disk manufacturing apparatus such as a stamper for an optical disk, or the like, and a method of and a system for encoding image information.
2. Description of the Related Art
Transmitters of information transmission apparatus, recorders of recording and reproducing apparatus having a disk or a magnetic tape as a storage medium, and signal processors of disk manufacturing apparatus such as a stamper for an optical disk incorporate an encoder as shown in FIG. 1 of the accompanying drawings, for example. The encoder shown in FIG. 1 is in accord with the moving picture image encoding standards for storage which have been established based on standardizing efforts made by the MPEG (Moving Picture Image Coding Experts Group).
FIG. 1 shows an internal structure of an image encoder.
As shown in FIG. 1, the image encoder has an input terminal 400 which is supplied with image data from a signal source (not shown). The input terminal 400 is connected to a first input terminal of a motion detector 421, an input terminal of a motion compensator 424, and an input terminal of a frame memory 422. The frame memory 422 has an output terminal connected to a second input terminal of the motion detector 421, an input terminal of a frame memory 423, an additive input terminal of an adder 427, an intraframe fixed contact “b” of a switch 428, and an input terminal of an inter-/intra-frame decision unit 429. The frame memory 423 has an output terminal connected to a third input terminal of the motion detector 421 and an input terminal of a motion compensator 425. The motion compensator 424 has an output terminal connected to an additive input terminal of an adder 426 which has a ½ multiplier therein. The motion compensator 425 has an output terminal connected to another additive input terminal of the adder 426. The adder 426 has an output terminal connected to a subtractive input terminal of the adder 427. The adder 427 has an output terminal connected to an inter-frame fixed contact “a” of the switch 428 and another input terminal of the inter-/intra-frame decision unit 429. The switch 428 has a movable contact “c” connected to an input terminal of a DCT (Discrete Cosine Transform) circuit 430 whose output terminal is connected to an input terminal of a quantizer 431. The quantizer 431 has an output terminal connected to an input terminal of a variable length coder 432 whose output terminal is connected to an input terminal of an output encoder 433. The output encoder 433 has an output terminal connected to an output terminal 434. The motion detector 421 has an output terminal connected to other input terminals of the motion compensators 424, 425 and another input terminal of the variable length coder 432.
The frame memories 422, 423 and the inter-/intra-frame decision unit 429 are connected to a controller 435.
The frame memories 422, 423 read and write frame image data according to read/write control signals R/W which are supplied from the controller 435.
At the time frame image data have been stored in the frame memory 422, if the frame memory 422 outputs frame image data of a present frame, then the input terminal 400 is supplied with frame image data of a future frame, and the frame memory 423 stores frame image data of a past frame. The present frame will be referred to as the “present frame”, the future frame as the “following frame”, and the past frame as the “preceding frame”.
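The three-frame arrangement described above can be sketched, for illustration only, as a minimal Python model; the class and attribute names are hypothetical and are not part of the apparatus:

```python
class FramePipeline:
    """Minimal model of frame memories 422 and 423: when encoding of
    the present frame finishes, its data move from memory 422 to
    memory 423 (becoming the preceding frame), and the frame at the
    input terminal is stored in memory 422 as the new present frame."""

    def __init__(self):
        self.present = None    # contents of frame memory 422
        self.preceding = None  # contents of frame memory 423

    def push(self, input_frame):
        self.preceding = self.present   # frame memory 422 -> 423
        self.present = input_frame      # input terminal -> memory 422

pipe = FramePipeline()
for frame in ["F1", "F2", "F3"]:
    pipe.push(frame)
# While "F3" is the present frame, "F2" is the preceding frame and the
# next frame at the input terminal would be the following frame.
```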
The motion detector 421 effects a motion detecting process on each macroblock having a size of 16 lines×16 pixels, for example, with respect to frame image data supplied through the input terminal 400, frame image data read from the frame memory 422, and frame image data read from the frame memory 423. The motion detecting process may be a well known motion detecting process based on full-search block matching principles, for example.
Specifically, the motion detector 421 detects a motion with respect to macroblock data MB(f) of the present frame stored in the frame memory 422 and macroblock data MB(f+1) of the following frame supplied through the input terminal 400, and produces motion vector data MV based on the detected motion, and also detects a motion with respect to macroblock data MB(f) of the present frame stored in the frame memory 422 and macroblock data MB(f−1) of the preceding frame stored in the frame memory 423, and produces motion vector data MV based on the detected motion.
A single signal line is shown as being connected to the output terminal of the motion detector 421, and only one symbol “MV” is used to indicate motion vector data. Actually, however, the motion detector 421 produces in each of the above motion detecting cycles as many motion vector data MV as the number of all macroblocks of the frame image data stored in the frame memory 422.
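The full-search block matching mentioned above can be illustrated with a short sketch; the function name, the sum-of-absolute-differences (SAD) matching criterion, and the search range are assumptions for illustration only:

```python
import numpy as np

def full_search(block, ref_frame, bx, by, search=4):
    """Sketch of full-search block matching: scan every displacement
    within +/-`search` pixels of (bx, by) in the reference frame and
    keep the candidate macroblock with the minimum sum of absolute
    differences (SAD) relative to `block`."""
    h, w = ref_frame.shape
    n = block.shape[0]
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > h or x + n > w:
                continue  # candidate falls outside the reference frame
            cand = ref_frame[y:y + n, x:x + n].astype(int)
            sad = int(np.abs(cand - block.astype(int)).sum())
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# A macroblock copied verbatim from the reference frame is found
# with zero matching error.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
mv, sad = full_search(ref[10:26, 6:22], ref, bx=4, by=8)
```

In the actual encoder this search is repeated for every macroblock of the frame stored in the frame memory 422, once against the following frame and once against the preceding frame.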
Based on the motion vector data MV supplied from the motion detector 421, the motion compensator 424 extracts the macroblock data MB(f+1) which are closest to the macroblock data MB(f) to be processed of the present frame, from the frame image data of the following frame supplied through the input terminal 400, and supplies the extracted macroblock data MB(f+1) to the adder 426.
Based on the motion vector data MV supplied from the motion detector 421, the motion compensator 425 extracts the macroblock data MB(f−1) which are closest to the macroblock data MB(f) to be processed of the present frame, from the frame image data of the preceding frame stored in the frame memory 423, and supplies the extracted macroblock data MB(f−1) to the adder 426.
The adder 426 adds the macroblock data MB(f+1) from the motion compensator 424 and the macroblock data MB(f−1) from the motion compensator 425 and multiplies the sum by “½” with the ½ multiplier therein, thereby producing average data representing the average of the macroblock data MB(f+1) from the motion compensator 424 and the macroblock data MB(f−1) from the motion compensator 425.
The adder 427 subtracts the average data supplied from the adder 426 from the macroblock data MB(f) of the present frame supplied from the frame memory 422, thereby producing differential data between the macroblock data MB(f) of the present frame and the macroblock data represented by the average data produced by the bidirectional predictive process.
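The operation of the adders 426 and 427 amounts to an average followed by a subtraction, which can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def bidirectional_residual(mb_present, mb_prev, mb_next):
    """Sketch of adders 426/427: adder 426 sums the motion-compensated
    macroblocks of the following and preceding frames and halves the
    sum (its built-in 1/2 multiplier); adder 427 subtracts that average
    from the present-frame macroblock, yielding the differential data."""
    average = (mb_prev.astype(np.int32) + mb_next.astype(np.int32)) // 2
    return mb_present.astype(np.int32) - average

# A macroblock lying exactly midway between its two references
# leaves no residual at all.
residual = bidirectional_residual(np.full((16, 16), 105),
                                  np.full((16, 16), 100),
                                  np.full((16, 16), 110))
```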
The inter-/intra-frame decision unit 429 connects the movable contact “c” of the switch 428 selectively to the inter-frame fixed contact “a” or the intra-frame fixed contact “b” thereof based on the differential data from the adder 427, the macroblock data MB(f) from the frame memory 422, and a frame pulse Fp supplied from the controller 435. The inter-/intra-frame decision unit 429 also supplies an inter-/intra-frame selection signal SEL indicative of a controlled state of the switch 428 to the controller 435. The inter-/intra-frame selection signal SEL is transmitted together with encoded image data as decoding information EDa to enable a controller of an image decoder which serves as a main unit for effecting a decoding process to switch between inter-/intra-frame data in the same manner as in the encoding process for decoding the image data.
Details of the image encoder shown in FIG. 1 are as follows: Image data to be encoded in terms of macroblocks are the frame image data of the present frame which are stored in the frame memory 422. The motion detector 421 detects motions in order to seek the macroblock data MB(f+1), MB(f−1) of the following and preceding frames which are closest to the macroblock data MB(f) of the present frame to be encoded. When the macroblock data MB(f+1), MB(f−1) of the following and preceding frames which are closest to the macroblock data MB(f) of the present frame are detected, motion vector data MV are produced. Using the motion vector data MV, the macroblock data MB(f+1), MB(f−1) of the following and preceding frames which are closest to the macroblock data MB(f) of the present frame are extracted so as not to transmit data which are in common with the previously transmitted data.
After the differential data are produced between the macroblock data MB(f) of the present frame and the macroblock data obtained according to the bidirectional predictive process by the adder 427, the macroblock data MB(f) of the present frame cannot be decoded merely based on the differential data. Therefore, the motion vector data MV are supplied to the variable length coder 432, and after the motion vector data MV are compressed by the variable length coder 432, the compressed motion vector data MV and the differential data are transmitted.
The inter-/intra-frame decision unit 429 serves to select either the encoding of the differential data or the encoding of the output data from the frame memory 422. The encoding of the differential data, i.e., the encoding of differential information between frames, is referred to as “inter-frame encoding”, and the encoding of the output data from the frame memory 422 is referred to as “intra-frame encoding”. The term “encoding” does not signify the differential calculation effected by the adder 427, but connotes the encoding process carried by the DCT circuit 430, the quantizer 431, and the variable length coder 432. The inter-/intra-frame decision unit 429 actually switches between the inter-/intra-frame encoding processes in terms of macroblocks. However, for an easier understanding of the present invention, it is assumed that the inter-/intra-frame decision unit 429 switches between the inter-/intra-frame encoding processes in terms of frames.
The encoded image data of each frame outputted from the switch 428 are generally referred to as an I picture, a B picture, or a P picture, depending on how they are encoded.
The I picture represents one frame of encoded image data produced when the macroblock data MB(f) of the present frame supplied from the switch 428 are intra-frame-encoded by the DCT circuit 430, the quantizer 431, and the variable length coder 432. For generating an I picture, the inter-/intra-frame decision unit 429 controls the switch 428 to connect the movable contact “c” to the fixed contact “b”.
The P picture represents one frame of encoded image data that comprise inter-frame-encoded data of differential data between the macroblock data MB(f) of the present frame supplied from the switch 428 and motion-compensated macroblock data of an I or P picture which precede in time the macroblock data MB(f) of the present frame, and data produced when the macroblock data MB(f) of the present frame are intra-frame-encoded. For generating a P picture, the motion vector data MV used to effect a motion compensating process on the image data of the I picture are generated from image data to be encoded as a P picture and image data preceding the image data in the sequence of the image data supplied to the image encoder.
The B picture represents data produced when differential data between the macroblock data MB(f) of the present frame supplied from the switch 428 and six types of macroblock data (described below) are inter-frame-encoded.
Two of the six types of macroblock data are the macroblock data MB(f) of the present frame supplied from the switch 428 and motion-compensated macroblock data of an I or P picture which precede in time the macroblock data MB(f) of the present frame. Another two of the six types of macroblock data are the macroblock data MB(f) of the present frame supplied from the switch 428 and motion-compensated macroblock data of an I or P picture which follow in time the macroblock data MB(f) of the present frame. The remaining two of the six types of macroblock data are interpolated macroblock data generated from I and P pictures which respectively precede and follow in time the macroblock data MB(f) of the present frame supplied from the switch 428, and interpolated macroblock data generated from P and P pictures which respectively precede and follow in time the macroblock data MB(f) of the present frame supplied from the switch 428.
Since the P picture contains encoded data using image data other than the image data of the present frame, i.e., inter-frame-encoded data, and also since the B picture comprises only inter-frame-encoded data, the P and B pictures cannot be decoded on their own. To solve this problem, a plurality of related pictures are put together into one GOP (Group Of Pictures) which is processed as a unit.
Usually, a GOP comprises one or more I pictures and zero or more non-I pictures. For an easier understanding of the present invention, it is assumed that intra-frame-encoded image data represent an I picture, that bidirectionally predicted and encoded image data represent a B picture, and that a GOP comprises a B picture and an I picture.
In FIG. 1, an I picture is generated along a route from the frame memory 422 through the switch 428, the DCT circuit 430, the quantizer 431 to the variable length coder 432, and a B picture is generated along a route from the input terminal 400 through the motion compensator 424, the adder 426, the output terminal of the frame memory 423, the motion compensator 425, the adder 426, the adder 427, the switch 428, the DCT circuit 430, the quantizer 431 to the variable length coder 432.
The DCT circuit 430 converts the output data from the switch 428, in each block of 8 lines×8 pixels, into coefficient data ranging from a DC component to harmonic AC components. The quantizer 431 quantizes the coefficient data from the DCT circuit 430 at a predetermined quantization step size. The variable length coder 432 encodes the quantized coefficient data from the quantizer 431 and the motion vector data MV from the motion detector 421 according to the Huffman encoding process, the run-length encoding process, or the like. The output encoder 433 generates inner and outer parity bits respectively with respect to the encoded data outputted from the variable length coder 432 and the decoding information EDa from the controller 435. The output encoder 433 then adds the generated inner and outer parity bits respectively to the encoded data outputted from the variable length coder 432 and the decoding information EDa from the controller 435, thereby converting a train of data to be outputted into a train of data in a product code format. A synchronizing signal and other signals are also added to the train of data in the product code format.
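The transform and quantization stages can be sketched as follows; the orthonormal DCT-II construction and the uniform rounding quantizer are illustrative assumptions, as the actual quantizer 431 may use a different characteristic:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: entry [k, x] weights pixel x
    # in the k-th frequency row; row 0 carries the DC component.
    m = np.array([[np.cos(np.pi * (2 * x + 1) * k / (2 * n))
                   for x in range(n)] for k in range(n)])
    m *= np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def encode_block(block, step):
    """Sketch of DCT circuit 430 and quantizer 431: a 2-D DCT of an
    8x8 block followed by uniform quantization at `step`."""
    c = dct_matrix()
    coeff = c @ block @ c.T           # separable 2-D DCT
    return np.round(coeff / step).astype(int)

# A flat 8x8 block concentrates all its energy in the single DC
# coefficient; every AC coefficient quantizes to zero.
quantized = encode_block(np.full((8, 8), 64.0), step=4)
```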
Data contained in a GOP when it is outputted include decoding information, frame data of a B picture, decoding information, and frame data of an I picture, arranged successively in the order named from the start of the GOP.
The decoding information EDa comprises GOP start data indicating the start of the GOP, the inter-/intra-frame selection signal SEL referred to above, and other data. If the GOP start data have a value of “1”, then the GOP start data indicate that the frame data with the GOP start data added to its start are frame data at the start of the GOP. If the GOP start data have a value of “0”, then the GOP start data indicate that the frame data with the GOP start data added to its start are not frame data at the start of the GOP, but frame data at the start of a picture.
Operation of the image encoder shown in FIG. 1 will be described below.
For generating an I picture of a GOP, the inter-/intra-frame decision unit 429 controls the switch 428 to connect the movable contact “c” to the intra-frame fixed contact “b”. Frame image data read from the frame memory 422 are encoded by the DCT circuit 430, the quantizer 431, and the variable length coder 432. At this time, decoding information EDa is supplied from the controller 435 to the output encoder 433. To the encoded data from the variable length coder 432 and the decoding information EDa from the controller 435, there are added inner and outer parity bits by the output encoder 433, which then outputs an I picture.
For generating a B picture of a GOP, the inter-/intra-frame decision unit 429 controls the switch 428 to connect the movable contact “c” to the inter-frame fixed contact “a”.
The motion detector 421 detects a motion successively in the macroblock data MB(f) of the present frame and the macroblock data MB(f+1) in the frame image data of the following frame. As a result, the motion detector 421 selects the macroblock data MB(f+1) which are closest to the macroblock data MB(f) of the present frame, and produces motion vector data MV indicative of the position of the macroblock data MB(f+1) with respect to the macroblock data MB(f). Similarly, the motion detector 421 detects a motion successively in the macroblock data MB(f) of the present frame and the macroblock data MB(f−1) in the frame image data of the preceding frame. As a result, the motion detector 421 selects the macroblock data MB(f−1) which are closest to the macroblock data MB(f) of the present frame, and produces motion vector data MV indicative of the position of the macroblock data MB(f−1) with respect to the macroblock data MB(f).
The two motion vector data MV thus produced are supplied to the variable length coder 432 and also to the motion compensators 424, 425. The motion compensator 424 extracts the macroblock data MB(f+1) represented by the motion vector data MV, and supplies the extracted macroblock data MB(f+1) to the adder 426. The motion compensator 425 extracts the macroblock data MB(f−1) represented by the motion vector data MV, and supplies the extracted macroblock data MB(f−1) to the adder 426.
The adder 426 adds the macroblock data MB(f+1) from the motion compensator 424 and the macroblock data MB(f−1) from the motion compensator 425, and multiplies the sum by “½”, thereby averaging the macroblock data MB(f+1), MB(f−1). The average data from the adder 426 are supplied to the adder 427 through the subtractive input terminal thereof. The additive input terminal of the adder 427 is supplied with the macroblock data MB(f) of the present frame read from the frame memory 422. The adder 427 subtracts the average data from the adder 426 from the macroblock data MB(f) of the present frame. The adder 427 produces output data which are inter-frame-encoded by the DCT circuit 430, the quantizer 431, and the variable length coder 432. The encoded data are supplied to the output encoder 433, which adds the decoding information EDa and inner and outer parity bits to the encoded data, and outputs a B picture.
When all the macroblock data MB(f) stored in the frame memory 422 have been inter-frame-encoded in the manner described above, the frame image data stored in the frame memory 422 are read and supplied to the frame memory 423, and stored as image data of a previous frame in the frame memory 423. The frame memory 422 is now supplied with the image data of the next frame as the image data of the present frame.
The concept of the encoding process carried out by the image encoder will be described below with reference to FIG. 2 of the accompanying drawings.
FIG. 2 shows the frame image data of successive frames that are to be encoded which are denoted by respective frame numbers F1˜F10. Those frame image data which are shown hatched are frame image data I1, I3, I5, I7, I9 as I pictures, and those frame image data which are shown blank are frame image data B2, B4, B6, B8, B10 as B pictures (or frame image data P2, P4, P6, P8, P10 as P pictures). The frame image data I1, B2 of the frame numbers F1, F2 make up a GOP, the frame image data I3, B4 of the frame numbers F3, F4 make up a GOP, the frame image data I5, B6 of the frame numbers F5, F6 make up a GOP, the frame image data I7, B8 of the frame numbers F7, F8 make up a GOP, and the frame image data I9, B10 of the frame numbers F9, F10 make up a GOP.
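The frame-number-to-picture-type assignment of FIG. 2 can be expressed compactly; this holds only for the simplified two-frame GOPs assumed here:

```python
def picture_type(frame_number):
    """Two-frame GOPs as in FIG. 2: odd-numbered frames (F1, F3, ...)
    are intra-frame-encoded I pictures; even-numbered frames (F2, F4,
    ...) are bidirectionally predicted B pictures."""
    return "I" if frame_number % 2 == 1 else "B"

# GOPs of FIG. 2: (I1, B2), (I3, B4), (I5, B6), (I7, B8), (I9, B10)
gops = [(f, f + 1) for f in range(1, 11, 2)]
```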
Of the frame image data shown in FIG. 2, the frame image data I1, I3, I5, I7, I9 of the frame numbers F1, F3, F5, F7, F9 are read from the frame memory 422 and supplied through the switch 428 to the DCT circuit 430, the quantizer 431, and the variable length coder 432, which intra-frame-encode the supplied frame image data.
For encoding image data of a B picture, as indicated by the arrows in FIG. 2, frame image data on both sides of frame image data to be encoded, i.e., frame image data of frames which precede and follow the frame image data to be encoded, are used to inter-frame-encode the image data. For example, the frame image data I1, I3 of the frames which precede and follow the frame image data of the frame number F2 are used to encode the frame image data of the frame number F2, and the frame image data I3, I5 of the frames which precede and follow the frame image data of the frame number F4 are used to encode the frame image data of the frame number F4.
For example, for encoding the frame image data B2 of the frame number F2, the frame image data B2 are stored as the frame image data of the present frame in the frame memory 422 shown in FIG. 1. At this time, the frame memory 423 stores the frame image data I1 of the frame number F1 as the frame image data of the preceding frame. When the frame image data B2 start being encoded, the frame image data I3 of the frame number F3 are supplied as the frame image data of the following frame through the input terminal 400.
The motion detector 421 detects a motion with respect to the macroblock data MB(f) of the frame image data B2 of the frame number F2 which are read from the frame memory 422 and the macroblock data MB(f−1) of the frame image data I1 of the frame number F1 which are read from the frame memory 423, and, as a result, produces one set of motion vector data MV. The motion detector 421 detects a motion with respect to the macroblock data MB(f) of the frame image data B2 of the frame number F2 which are read from the frame memory 422 and the macroblock data MB(f+1) of the frame image data I3 of the frame number F3 which are supplied from the input terminal 400, and, as a result, produces one set of motion vector data MV.
The motion compensator 424 extracts the macroblock data MB(f−1) of the frame image data I1 of the frame number F1 which are indicated by the motion vector data MV. The motion compensator 425 extracts the macroblock data MB(f+1) of the frame image data I3 of the frame number F3 which are indicated by the motion vector data MV. The macroblock data MB(f−1), MB(f+1) which are extracted respectively by the motion compensators 424, 425 have their contents, i.e., their arrangement of the levels of pixel data in the macroblocks, closest to the macroblock data MB(f) of the frame image data B2 of the frame number F2.
The adder 426 adds the macroblock data MB(f−1) of the frame image data I1 of the frame number F1 from the motion compensator 424 and the macroblock data MB(f+1) of the frame image data I3 of the frame number F3 from the motion compensator 425 and multiplies the sum by “½” with the ½ multiplier therein, thereby producing average data representing the average of the two macroblock data MB(f−1), MB(f+1). The average data are supplied from the adder 426 to the adder 427 through the subtractive input terminal thereof.
The adder 427 is also supplied with the macroblock data MB(f) of the frame image data B2 of the frame number F2 through the additive input terminal thereof. The adder 427 thus subtracts the average data from the macroblock data MB(f) of the frame image data B2 of the frame number F2, producing differential data. The produced differential data are supplied through the switch 428 to the DCT circuit 430, the quantizer 431, and the variable length coder 432, which encode the differential data. The above process is effected on all the macroblock data MB(f) of the frame image data B2 of the frame number F2, thereby inter-frame-encoding the frame image data B2 of the frame number F2. The frame image data B4, B6, B8, B10 of the frame numbers F4, F6, F8, F10 are similarly inter-frame-encoded.
The concept of a decoding process will be described below with reference to FIG. 2. FIG. 2 shows the frame image data to be decoded of successive image frames which are denoted by respective frame numbers F1˜F10. Those frame image data which are shown hatched are frame image data as I pictures, and those frame image data which are shown blank are frame image data as B pictures (or frame image data as P pictures).
Of the frame image data shown in FIG. 2, the frame image data I1, I3, I5, I7, I9 of the frame numbers F1, F3, F5, F7, F9 are decoded by the image decoder and then outputted as reproduced image data.
As indicated by the arrows in FIG. 2, frame image data as a B picture are decoded using frame image data on both sides of the frame image data to be decoded, i.e., frame image data of frames which precede and follow the frame image data to be decoded. For example, the frame image data I1, I3 of the frames which precede and follow the frame image data B2 of the frame number F2 are used to decode the frame image data B2 of the frame number F2.
For example, for decoding the frame image data B2 of the frame number F2, the frame image data I1 of the frame number F1 as an I picture and the frame image data I3 of the frame number F3 as an I picture are used to decode the frame image data B2 of the frame number F2. The decoding process employs the motion vector data which have been produced by the motion detection with respect to the frame image data B2 of the frame number F2 and the frame image data I1 of the frame number F1, and also the frame image data B2 of the frame number F2 and frame image data I3 of the frame number F3.
The macroblock data indicated by the motion vector data are extracted from the frame image data of the frame number F1, and the macroblock data indicated by the motion vector data are extracted from the frame image data of the frame number F3. These macroblock data are added to each other, and averaged to produce average data by being multiplied by the coefficient “½”. The differential data of the frame image data B2 of the frame number F2 and the average data are added to each other, thereby restoring the macroblock data of the frame image data B2 of the frame number F2.
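The restoration step described above is the mirror image of the encoding-side subtraction and can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def restore_macroblock(differential, mb_prev, mb_next):
    """Sketch of the decoder-side restoration: average the two
    motion-compensated reference macroblocks with the coefficient 1/2,
    then add the transmitted differential data back."""
    average = (mb_prev.astype(np.int32) + mb_next.astype(np.int32)) // 2
    return average + differential

# Round trip: the residual formed at the encoder restores exactly,
# because both sides compute the same average.
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (16, 16))
nxt = rng.integers(0, 256, (16, 16))
present = rng.integers(0, 256, (16, 16))
differential = present - (prev + nxt) // 2
restored = restore_macroblock(differential, prev, nxt)
```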
The above compression encoding process is employed when digital video data are recorded on magnetic tapes, optical disks such as CD-ROMs, and hard disks. For compressing and encoding moving image data of a long period of time, such as movie image data, and recording all the compressed and encoded moving image data on such a storage medium, it is necessary that the amount of all image data to be recorded which have been compressed and encoded be equal to or smaller than the amount of data available after the decoding information EDa and the parity bits are subtracted from the amount of all data that can be recorded on the storage medium.
For example, CD-ROMs are mass-produced by a stamper as a master. Such a stamper is manufactured by the following manufacturing steps:
1. A glass substrate is coated with a resist material, forming a resist film on the glass substrate.
2. The compressed and encoded digital video data, carried by a laser beam emitted from a laser beam source, are applied to the resist film.
3. Only the area of the resist film to which the laser beam has been applied is removed by development.
4. A melted resin such as polycarbonate or the like is poured onto the resist film on the glass substrate.
5. After the resin layer is hardened, it is peeled off the glass substrate.
6. The irregular surface of the resin layer is plated by electroless plating, so that a plated layer is formed on the irregular surface of the resin layer.
7. The plated layer is then plated with a metal such as nickel or the like, so that a metal plated layer is formed on the plated layer on the irregular surface of the resin layer.
8. The resin layer is then peeled off the plated layer on the irregular surface of the resin layer.
The remaining plated layer after the resin layer is peeled off serves as the stamper.
Unlike hard disks and magnetooptical disks, in the case of optical disks such as CD-ROMs, the digital video data are compressed, encoded, and recorded when the above stamper is manufactured. If the amount of all compressed and encoded image data to be recorded is smaller than the amount of all image data that can be recorded on the glass substrate, then all the compressed and encoded image data are recorded on the glass substrate, merely leaving a blank area free of any recorded digital video data in the recordable area of the glass substrate. However, if the amount of all compressed and encoded image data to be recorded is greater than the amount of all image data that can be recorded on the glass substrate, then some of the compressed and encoded image data to be recorded are not recorded on the glass substrate.
Storage mediums such as magnetooptical disks, hard disks, or the like, where data can be recorded repeatedly in the same storage area, can remedy the above problem by recording the data again on the storage medium, though this results in an expenditure of additional time. However, storage mediums such as CD-ROMs which are mass-produced by one or more stampers cannot alleviate the above drawback unless a stamper or stampers are fabricated again, resulting in a much greater expenditure of time and expense. Once CD-ROMs mass-produced by a stamper or stampers that are fabricated from a glass substrate which misses some of the compressed and encoded image data to be recorded are on the market, the CD-ROM manufacturer has to collect those CD-ROMs from the market.
According to one conventional solution, a single quantization step size capable of recording all image data to be recorded on a storage medium is determined based on the amount of all image data to be recorded and the storage capacity of the storage medium, and the data of the quantization step size are supplied to a quantizer when the image data are recorded on the storage medium. Stated otherwise, the quantization step size in the quantizer 431 in the image encoder shown in FIG. 1 may be set to a predetermined quantization step size. In this manner, all image data to be recorded can reliably be recorded on a storage medium.
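The conventional selection of a single quantization step size can be sketched as a search over candidate step sizes; the table mapping step size to encoded data amount is a hypothetical input here, which in practice would have to be estimated or obtained by trial encoding:

```python
def fit_step_size(size_at_step, capacity_bits, overhead_bits):
    """Sketch of the conventional approach: from a (hypothetical) table
    mapping candidate quantization step sizes to the total amount of
    encoded image data each would produce, pick the smallest step whose
    output fits in the capacity left after the decoding information EDa
    and the parity bits are accounted for."""
    budget = capacity_bits - overhead_bits
    for step in sorted(size_at_step):        # try the finest step first
        if size_at_step[step] <= budget:
            return step
    return None  # no candidate step size fits the medium

# Hypothetical encoded sizes, in bits, for step sizes 2, 4, and 8.
chosen = fit_step_size({2: 900, 4: 700, 8: 400},
                       capacity_bits=800, overhead_bits=50)
```

The chosen step is then supplied to the quantizer for the entire recording, which is precisely the rigidity criticized in the paragraphs that follow.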
Moving image data vary to different degrees from frame to frame. Moving objects in moving images have various complex moving patterns including simple translation, different moving speeds, different moving directions, changes in moving directions per unit time, changes in the shape of moving objects, etc. If the moving pattern of a moving object is not simple translation, then when macroblock data closest to macroblock data in frame image data to be encoded of a present frame are extracted from frame image data of a preceding or following frame using motion vector data produced as a result of the detection of a motion by the motion detector 421 shown in FIG. 1, the pattern of the levels of pixel data in the extracted macroblock data of the preceding or following frame may differ greatly from the pattern of the levels of pixel data in the macroblock data of the present frame.
In such cases, the amount of differential data produced by subtracting the average data of the macroblock data of the preceding and following frames from the macroblock data of the present frame is not greatly smaller than the amount of the macroblock data of the present frame. Specifically, when the frames of moving image data are observed, since the moving image data do not vary uniformly from image to image, the amount of data produced in each macroblock, each frame, and hence each GOP is not constant.
Under the conventional solution described above, however, the moving image data, whose amount produced in each macroblock, each frame, and hence each GOP is not constant, are always quantized at a constant quantization step size. When the amount of differential data from the adder 427 shown in FIG. 1 is large, the DCT circuit 430 produces many types of coefficient data, but such coefficient data are quantized roughly at the single quantization step size by the quantizer 431. Conversely, when the amount of differential data from the adder 427 shown in FIG. 1 is small, the DCT circuit 430 produces fewer types of coefficient data, and such coefficient data are quantized finely at the single quantization step size by the quantizer 431.
For example, it is assumed that when the amount of differential data is large, “20” types of coefficient data are produced, that when the amount of differential data is smaller, “4” types of coefficient data are produced, and that the quantization step size is “4”. When the amount of differential data is large, the coefficient data are quantized at the quantization step size of “4” even though there are “20” types of coefficient data. When the amount of differential data is smaller, the coefficient data are quantized at the quantization step size of “4” even though there are only “4” types of coefficient data. Accordingly, when the amount of information is large, it is quantized roughly, and when the amount of information is smaller, it is quantized finely. Since the information cannot be quantized appropriately depending on the amount thereof, the quality of an image restored from an image which contains a large amount of information, in particular, is poor.
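The numerical example above can be checked with a short sketch; modeling the coefficient "types" as the distinct integer values 0 through N−1 is an illustrative assumption:

```python
def surviving_levels(num_types, step):
    """Model coefficient "types" as the distinct integer values
    0 .. num_types-1 and count how many distinct quantized levels
    remain after dividing by the fixed quantization step size."""
    return len({value // step for value in range(num_types)})

# At the fixed step size of "4", rich data ("20" coefficient types)
# are collapsed into few levels, i.e., quantized roughly, while simple
# data ("4" types) are collapsed into a single level.
rich = surviving_levels(20, 4)
simple = surviving_levels(4, 4)
```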
There has been a demand for a method of and a system for quantizing image data appropriately depending on the amount of differential data, and recording all image data reliably on a storage medium.