The present technology of compressing and decompressing data is used for equipment units in various fields, and is used, for example, in on-board equipment.
FIG. 1 illustrates an example of mounting an on-board image data compressing and decompressing device.
In FIG. 1, a plurality of cameras 2-1 through 2-6 are provided outside a vehicle 1, a monitor 3 and rear monitors 4-1 and 4-2 are provided in the vehicle 1, and these components are connected over an on-board LAN 5. Image data compressing devices 6-1 through 6-6 are respectively connected to the cameras 2-1 through 2-6, and image data decompressing devices 7-1 through 7-3 are connected to the monitor 3 and the rear monitors 4-1 and 4-2. The image data of the images shot by each of the cameras 2-1 through 2-6, and the image data of the images of a car navigation system not illustrated in the attached drawings, are compressed by the image data compressing devices 6-1 through 6-6; the compressed data transferred over the on-board LAN 5 is decompressed by the image data decompressing devices 7-1 through 7-3, and the decompressed images are then displayed on the monitor 3 and the rear monitors 4-1 and 4-2.
It is necessary to meet the following requirements to compress and decompress moving picture data for vehicles.
(1) High Quality (An Original Image is to be of High Quality as a Natural Image and a CG (Computer Graphics) Image)
As the image information processed in a vehicle, there are natural images represented by common TV images, moving pictures, etc., and CG images (digital images) represented by maps of a car navigation system, etc. Generally, low-frequency components are mainly included in natural images while high-frequency components are mainly included in digital images. In recent on-board terminals and mobile terminals including mobile telephones, both digital images of maps, etc. and natural images of TV, movies, etc. are processed, and an effective data compressing system for both low- and high-frequency components is demanded to efficiently transmit both types of image data.
(2) Low Delay (Not Requiring a Long Time to Compress and Decompress Data for an On-Board Camera)
Image information for on-board use can be images from a peripheral monitor camera. To perform a real-time monitoring operation, a low delay is required to quickly perform a compressing and decompressing process.
(3) Lightweight Device (Small Circuit)
Picture information is normally transmitted over an on-board LAN. However, in a multicast transmission, a compressing and decompressing device is required for each LAN terminal. Therefore, each circuit is to be small.
(4) High-Speed Processing
Since 30 to 60 frames of data are transmitted and received per second for moving pictures, data is to be compressed at a high speed per unit time. In particular, high-resolution images such as high-definition pictures have become widespread, and it is necessary to compress data at an even higher speed.
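The throughput requirement above can be illustrated with a rough pixel-rate calculation. The resolutions below are illustrative assumptions, not figures from the text.

```python
# Rough pixel-throughput estimate for real-time moving-picture compression.
# The resolutions and frame rates are illustrative assumptions.
def pixels_per_second(width, height, fps):
    return width * height * fps

sd = pixels_per_second(720, 480, 30)     # standard definition at 30 fps
hd = pixels_per_second(1920, 1080, 60)   # high definition at 60 fps

print(sd)   # 10,368,000 pixels/s
print(hd)   # 124,416,000 pixels/s
```

With one pixel arriving per clock, a high-definition stream leaves far less than a microsecond per pixel, which is why per-pixel processing speed matters.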
The image compressing technology associated with the above-mentioned subjects is described below.
(1) JPEG, MPEG (Transform Coding)
In the JPEG and the MPEG, a DCT (discrete cosine transform) is performed on an original image, and the obtained DCT coefficients are quantized.
A DCT is a method of frequency-converting image data. Since human eyes are sensitive to low-frequency components (flat portions in an image), a natural image can be compressed at a high compression ratio to suppress picture degradation by finely quantizing a DCT coefficient of a low frequency while roughly quantizing a DCT coefficient of a high frequency.
However, since the high-frequency components of lines, letters, etc. in CG images such as maps are considerably degraded by this method, the method is not appropriate for compressing CG images.
FIG. 2 illustrates a coding method by the DCT used in the JPEG etc. as a prior art.
In the DCT, original image data is first frequency-converted so as to divide the data into high-frequency components and low-frequency components. Then, the low-frequency components are finely quantized and the high frequency components are roughly quantized. Thus, the image data can be compressed at a high compression ratio. However, the picture degradation of the high-frequency components of lines, letters, etc. remains in this compressing method.
As for the compression ratio and the circuit size, the two-dimensional correlation can be acquired by performing a converting and coding process on a block of 8×8 pixels in the JPEG, thereby realizing a high compression ratio (about 1/10). However, in this case, memory of at least 8 lines is required and the circuit becomes large. Furthermore, in the MPEG, a considerably high compression ratio (1/20 or more) can be expected because the correlation is acquired between frames, but the memory for holding data of 1 frame is required and the circuit becomes larger.
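The transform-coding step of FIG. 2 can be sketched as follows: an 8×8 block is frequency-converted by a 2-D DCT, then low-frequency coefficients are quantized finely and high-frequency coefficients coarsely. The quantization step table below is an illustrative assumption, not a table from the JPEG standard.

```python
import numpy as np

N = 8

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (row index = frequency).
    c = np.array([np.sqrt(1.0 / n)] + [np.sqrt(2.0 / n)] * (n - 1))
    k = np.arange(n)
    return c[:, None] * np.cos(
        np.pi * (2 * k[None, :] + 1) * np.arange(n)[:, None] / (2 * n))

D = dct_matrix(N)

def forward(block):
    return D @ block @ D.T      # 2-D DCT of an 8x8 block

def inverse(coeff):
    return D.T @ coeff @ D      # inverse 2-D DCT

# Quantization step grows with frequency: fine for low, coarse for high.
u, v = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
q = 4.0 + 4.0 * (u + v)                            # assumed step table

block = np.tile(np.linspace(0, 255, N), (N, 1))    # smooth (low-frequency) block
coeff = forward(block)
quantized = np.round(coeff / q)                    # the lossy step
restored = inverse(quantized * q)

print(np.max(np.abs(block - restored)))            # reconstruction error
```

For a smooth block, most energy sits in the finely quantized low-frequency coefficients, so the reconstruction error stays small; for a block containing sharp lines or letters, the coarsely quantized high frequencies carry the detail, which is exactly the degradation the text describes.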
(2) JPEG-LS (Lossless)
The JPEG-LS is a compressing system capable of performing lossless compression on still image data. In this system, a reasonable level value is estimated by considering edges in the vertical and horizontal directions on the basis of an MED (median edge detector, a type of MAP/DPCM prediction), and the estimation error is directly coded.
FIG. 3 illustrates a compressing system by the JPEG-LS as a prior art.
In the JPEG-LS, an estimating unit estimates a pixel X from the pixels A, B, and C illustrated in FIG. 3. Then, the error (X−X′) between the estimated value X′ and the measured value X is obtained and coded, thereby compressing the data.
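The MED prediction of FIG. 3 can be sketched in a few lines: pixel X is estimated from its left (A), upper (B), and upper-left (C) neighbors, and the residual X − X′ is what gets coded. The sample values are illustrative assumptions.

```python
# Minimal sketch of the MED (median edge detector) prediction used in JPEG-LS.
def med_predict(a, b, c):
    # Detect a vertical/horizontal edge at C and clamp the prediction.
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c        # smooth region: planar prediction

# Example: a horizontal edge above X (B jumps relative to A and C).
a, b, c, x = 100, 200, 100, 198
x_pred = med_predict(a, b, c)
error = x - x_pred          # this residual is entropy-coded
print(x_pred, error)        # 200 -2
```

Because the predictor switches between min, max, and the planar value a + b − c, it tracks edges in either direction, which is why the residuals stay small and compress well.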
Described below is the subject of the compressing technique by the JPEG-LS.
Subject (1): Difficult to Adjust Image Quality
Since the JPEG-LS is lossless compression, it is difficult to gradually trade off image quality as in lossy compression.
Since a quantization error propagates in the direction of the line, and the next pixel is estimated on the basis of a pixel including that quantization error, the estimation accuracy becomes worse as the quantizing step becomes coarser.
For example, in FIG. 4, when the data of the pixel A includes an error, the error propagates through the pixel X1 and the pixel X2. Therefore, the coarser the quantizing step, the worse the estimation accuracy becomes.
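The propagation described above can be sketched with a one-line DPCM loop: each pixel is predicted from the *decompressed* previous pixel, so an error introduced by a coarse quantizer feeds into every later prediction. The pixel values and step sizes are illustrative assumptions.

```python
# Sketch of quantization-error behaviour in lossy DPCM along one line (FIG. 4).
def dpcm_line(pixels, step):
    recon = [pixels[0]]                  # first pixel transmitted as-is
    for x in pixels[1:]:
        pred = recon[-1]                 # estimate from the previous decoded pixel
        q = round((x - pred) / step)     # quantization of the residual
        recon.append(pred + q * step)    # decoder-side reconstruction
    return recon

line = [100, 103, 105, 110, 112, 115]
for step in (1, 8):
    recon = dpcm_line(line, step)
    worst = max(abs(a - b) for a, b in zip(line, recon))
    print(step, worst)   # worst-case error grows with the quantizing step
```

With step 1 the residuals are exact and the line reconstructs perfectly; with a coarse step the reconstruction drifts, and since each prediction starts from the drifted value, the error carries along the line exactly as in FIG. 4.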
Subject (2): Very Difficult to Compress and Decompress Moving Pictures in Real Time
In lossy compression, estimation, quantization, and calculation of a decompressed pixel level value are executed for each pixel in the direction of the line. However, in compression by the JPEG-LS, the decompressed pixel level value of the immediately previous pixel is necessary to estimate the next pixel. Therefore, it is hard to perform high-speed processing.
For example, when a pixel is processed in the order illustrated in FIG. 5A, it is necessary to compress one pixel in the period (1 clock) in which one pixel is transmitted. However, if the processing time is 1 clock in each of the processes of estimating, quantizing, and calculating a decompressed pixel level value, 3 clocks are required to compress one pixel. Then, since the next pixel is estimated using the immediately previous pixel, the next pixel cannot be estimated until the immediately previous pixel is completely decompressed. Therefore, the compressing and decompressing processes are performed according to the timing as illustrated in FIG. 5B, and cannot be performed in real time.
(3) Hierarchical Coding or Sequential Regeneration Coding System (Patent Document 1, Patent Document 2)
To realize estimation and coding with high quality and a high compression ratio (image quality adjustment), a hierarchical estimation method is frequently used. An example of hierarchical estimation in a prior art is described below with reference to FIG. 6.
1) A bit plane (a black-and-white image of 0s and 1s for each bit of depth, that is, 8 planes for 8 bits) is generated. In FIG. 6, for simplicity, only three planes in total are illustrated: the top plane obtained in the extrapolating process and two planes obtained in the subsequent interpolating process.
2) The pixels in each plane are hierarchically binary-coded. At and after the second hierarchical level, the coding order and means are changed on the basis of the states of the coded peripheral pixels.
3) Depending on the situation, a pixel may be neither coded nor decoded; instead, the simple average of four already decoded peripheral pixels is used as the pixel level value of the non-decoded pixel.
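Two pieces of the procedure above can be sketched briefly: step 1), splitting an 8-bit image into bit planes, and step 3), filling a non-decoded pixel with the simple average of four already decoded neighbors. The array contents and neighbor values are illustrative assumptions.

```python
import numpy as np

def bit_planes(image):
    # One binary (0/1) plane per bit of depth: 8 planes for 8-bit pixels.
    return [(image >> b) & 1 for b in range(8)]

img = np.array([[52, 55, 61],
                [63, 59, 55],
                [67, 61, 68]], dtype=np.uint8)

planes = bit_planes(img)
# The original level values are recoverable by summing the weighted planes.
restored = sum(p.astype(np.uint16) << b for b, p in enumerate(planes))

# Step 3): a pixel that is not itself coded/decoded is estimated as the
# simple average of four already decoded peripheral pixels.
up, down, left, right = 60, 64, 58, 62
estimated = (up + down + left + right) // 4
print(int(estimated))   # 61
```

The averaging shortcut is what lets some pixels be skipped entirely during coding, but as the text notes, implementing the full hierarchical procedure in hardware requires a multi-line buffer and a large circuit.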
When the hierarchical estimation is applied as is, the complicated procedure requires a large circuit and yields unreasonable cost-performance for LSI implementation. In addition, it requires a large buffer memory corresponding to a block of lines (5 lines in the case illustrated in FIG. 6), which also enlarges the circuit.
Patent Document 1: Japanese Laid-open Patent Publication No. 60-127875
Patent Document 2: Japanese Laid-open Patent Publication No. 10-84548