1. Field of Invention
The present invention relates to an image processing technique, and more particularly to a static image compression method, a computer readable data structure, and a computer readable storage medium.
2. Related Art
In early times, digital image files were very large, which led to problems such as slow processing speeds and inconvenience in storage and transmission. An "image compression technique" was therefore developed. Through the technique, the memory occupied by a compressed image is much smaller than that of the original image file. Besides, the original image can be restored through proper decompression.
To further improve compression efficiency and space utilization, a "lossy" compression algorithm was put forward. Lossy image compression is developed mainly based on the sensitivity of the human eye, in which the luminance details of a digital image are retained, while a large amount of color data is converted into a simpler form, so as to save space.
The Joint Photographic Experts Group (JPEG) technique is a lossy compression standard widely applied in computer image processing, in which an image is destructively compressed and cannot be restored exactly after the compression, so the image inevitably suffers losses. Although a lossless JPEG standard based on the JPEG technique has also been provided, it has not been widely promoted so far.
The JPEG technique adopts the concept of lossy coding. Firstly, an image is segmented into a collection of 8×8 sub-images (i.e., blocks of 8 pixels × 8 pixels). Then, a discrete cosine transform (DCT) is applied to each sub-image. After that, the less important color components of each sub-image are removed, and only the essential luminance information is retained, so as to achieve a high compression rate.
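The block segmentation and 2D DCT described above can be sketched as follows. This is only an illustrative implementation with hypothetical function names, not the code of any standardized codec:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal type-II DCT basis matrix (the transform applied along
    # each row and column of an 8x8 block).
    m = np.array([[np.cos(np.pi * k * (2 * i + 1) / (2 * n))
                   for i in range(n)] for k in range(n)])
    m[0, :] *= np.sqrt(1.0 / n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

def split_into_blocks(image, n=8):
    # Segment a grayscale image (dimensions assumed to be multiples of n)
    # into a list of n x n sub-images.
    h, w = image.shape
    return [image[r:r + n, c:c + n]
            for r in range(0, h, n) for c in range(0, w, n)]

def dct2(block):
    # 2-D DCT: apply the 1-D transform to the rows, then to the columns.
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T
```

For a uniform block, only the top-left (DC) coefficient is non-zero after the transform, which is why smooth image regions compress particularly well.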
Skipping some trivial steps, the JPEG technique mainly compresses each sub-image as follows. Firstly, the DCT is applied to the sub-image data, and the resulting coefficients are quantized into integers. Next, the quantized two-dimensional (2D) DCT coefficients are converted by a zig-zag scan into a one-dimensional (1D) array. Then, the 1D array is coded according to a pre-defined Huffman coding table. The zig-zag scan places many high-frequency zero values adjacent to one another, so the Huffman coding achieves a better compression rate, especially for consecutive runs of zero values. Finally, a JPEG file is generated according to the JPEG syntax. The decompression method applies the inverse of the compression procedures, but since the quantization process is irreversible, the restored image differs from the original one, thus causing image loss.
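The quantization and zig-zag steps just described can be sketched as follows. This is an illustrative example only; the quantization table passed to `quantize` would in practice be the table carried in the JPEG file, not anything defined here:

```python
import numpy as np

def quantize(coeffs, qtable):
    # Divide each DCT coefficient by its quantization step and round to
    # the nearest integer; this rounding is the irreversible, lossy step.
    return np.rint(coeffs / qtable).astype(int)

def zigzag_order(n=8):
    # Zig-zag order: walk the anti-diagonals, alternating direction on
    # each one, so low-frequency coefficients come first.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    # Flatten a quantized 2-D block into a 1-D sequence; the
    # high-frequency zeros cluster at the tail, which favors the
    # subsequent run-length and Huffman coding.
    return [block[r, c] for r, c in zigzag_order(block.shape[0])]
```

A run-length pass over the zero values, followed by Huffman coding of the resulting symbols, then completes the entropy-coding stage.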
However, the compressed image consists only of computer-readable binary data, and cannot be displayed in a form that is visible and recognizable to human eyes. Thus, a current JPEG file must be completely decompressed in order to know the objects or recognizable content of the image.
As for digital cameras, due to technological progress, the images captured by a digital camera have increasingly higher resolutions. The resolution has gradually developed from one or two megapixels to nearly ten megapixels, and some more sophisticated digital cameras even support more than ten megapixels. However, as the pixel count grows higher and higher, the image may still be too large even after compression. Taking a ten-megapixel image for example, the size of the file after compression may still be close to 10 megabytes (MB), and the time needed for image processing is directly proportional to the size of the image. Under the circumstance that the content of an image is known only after complete decompression, more time is wasted waiting for the image to be decompressed to show its actual content. Moreover, when a large number of images are to be processed, it takes plenty of time to browse the images in order to pick out the desired ones.