At present, known lossless storage methods for digital images process and store every pixel of every frame of the digital image completely, without omission. The major drawbacks of such a method are the huge amount of stored information and the long storing time. Moreover, because the stored data volume of the digital image is so large, that is to say, because the number of bytes in the storage file created to store a digital image is so big, transmitting, reading, and displaying the file take a long time; the said method therefore has low efficiency.
Since digital image processing mostly involves two-dimensional information, it inevitably requires handling a large amount of data. For example, a black-and-white digital image with a low resolution of 256×256 requires approximately 64 kbit of data, whereas a color digital image with a high resolution of 512×512 requires approximately 768 kbit of data. Processing a video digital image sequence at 30 frames/second requires a data amount of about 500 kbit to 22.5 Mbit. Therefore, digital image processing places high demands on the computing speed and storage capacity of a computer.
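The per-frame figures above follow from multiplying resolution by bit depth. As a minimal sketch, assuming 1 bit per pixel for the binary black-and-white image and 1 kbit = 1024 bits (assumptions for illustration, not stated in the text):

```python
# Back-of-envelope estimate of raw digital image data amounts.
# Assumptions: 1 bit/pixel for a binary black-and-white image,
# and 1 kbit = 1024 bits.

def raw_bits(width, height, bits_per_pixel):
    """Uncompressed data amount of one image frame, in bits."""
    return width * height * bits_per_pixel

# 256x256 black-and-white image at 1 bit/pixel
bw_frame = raw_bits(256, 256, 1)
print(bw_frame / 1024, "kbit per frame")  # 64.0 kbit per frame

# A 30 frame/second sequence multiplies the per-frame amount by 30.
print(30 * bw_frame / 1024, "kbit per second")  # 1920.0 kbit per second
```

This illustrates why even modest resolutions and frame rates quickly strain computing speed and storage capacity.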
In addition, digital image processing usually occupies a frequency bandwidth several orders of magnitude wider than that occupied by voice processing. For instance, a video digital image usually occupies a bandwidth of about 5.6 MHz, whereas voice occupies a bandwidth of only about 4 kHz. Therefore, imaging, transmitting, processing, storing, displaying, and otherwise handling digital images is technically difficult and costly, which leads to a high demand for frequency-band compression techniques.
The pixel contents, or pixel values, of the pixels of a digital image are usually highly correlated with each other. Taking a video image frame as an example, the correlation coefficient of two successive pixels in the same row, or of corresponding pixels in two successive rows, can reach above 0.9, and the correlation between the image contents of two successive image frames is generally much higher than the intra-frame correlation, which makes it possible to compress the image information by digital image processing technology. Known lossless compression methods for digital images include the Shannon-Fano encoding method, the Huffman encoding method, the run-length encoding method, Lempel-Ziv-Welch (LZW) encoding, the arithmetic encoding method, and so on. These methods mainly exploit statistical data redundancy to compress data and can recover the original data without any distortion, but reliance on statistical redundancy limits the achievable compression ratio. Moreover, temporal redundancy (or "time-domain redundancy") exists between the pixels of different image frames in an image sequence. When this temporal redundancy is exploited for lossless compression, recording the motion vectors of the pixels creates a large amount of data and thus leads to a huge storage expense.