The performance of modern HPC systems makes it possible to process very large volumes of data.
As is known in the art, such performance depends notably on computing power which, according to Moore's law, doubles approximately every 18 months. However, computing power is not the only criterion defining the performance of an HPC system. In particular, the speed of I/O processing (reading input data and writing output data) between the operating processors and a file system must also be considered.
Indeed, compared to the growth of computing power, the speed of I/O processing grows much more slowly. Hence, in current HPC systems, data processing is limited more by I/O processing than by computing power.
To solve this issue, one technique proposed in the art consists in using data compression during I/O processing.
In particular, according to this technique, data is compressed using existing compression routines so that the overall I/O processing time can be considerably reduced.
There are mainly two classes of data compression methods: lossless data compression and lossy data compression. With lossless data compression, all the information in the data is preserved, but at the cost of a relatively low compression ratio. With lossy data compression, a higher compression ratio is usually obtained, but the user must accept the loss of a certain level of accuracy in the data. For this reason, the choice between lossless and lossy data compression methods generally depends on the nature of the processed data.
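The defining property of lossless compression described above can be sketched as follows; this is an illustrative Python fragment using the standard `zlib` library, not part of the invention, and the sample data is hypothetical:

```python
import zlib

# Highly repetitive data compresses well losslessly: the original
# bytes are recovered exactly after decompression.
data = b"ABCD" * 1000  # 4000 bytes of repetitive sample data
compressed = zlib.compress(data)
ratio = len(data) / len(compressed)

# Lossless property: decompression restores the data bit-for-bit.
restored = zlib.decompress(compressed)
```

On data with little redundancy (e.g. noisy measurements), the same routine yields a ratio close to 1, which is why lossy methods are considered for such data.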
Thus, for example, for two-dimensional graphic data, it is known to use a wavelet compression method, which belongs to the class of lossy compression methods. This method is one of the best-known compression methods used in graphic processing (for example, JPEG2000 for two-dimensional graphic data).
The method is based on the well-known wavelet transform which, applied to graphic data, converts the pixels forming this data into wavelet coefficients. The distribution of wavelet coefficient values is usually centered around zero, with few large coefficients. Since most of the information is concentrated in a small fraction of the coefficients, the wavelet coefficients of the input data can be compressed more easily than the original input data.
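The concentration of information described above can be illustrated with a minimal one-level 1-D Haar wavelet transform, written here as a self-contained Python sketch; the Haar wavelet is only the simplest member of the wavelet family and the smooth sample signal is hypothetical, not data from the invention:

```python
def haar_1d(signal):
    """One level of the Haar wavelet transform.

    Splits the signal into pairwise averages (the coarse
    approximation) and pairwise differences (the detail
    coefficients). For smooth data the details are near zero.
    """
    averages = [(signal[i] + signal[i + 1]) / 2
                for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2
               for i in range(0, len(signal), 2)]
    return averages, details

# A smooth gradient, as is typical for neighboring pixels in
# graphic data.
signal = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]
averages, details = haar_1d(signal)
# All detail coefficients come out as -0.25: small, uniform values
# clustered near zero, which an entropy coder can compress far more
# easily than the original pixel values.
```

In practice, schemes such as JPEG2000 apply a two-dimensional transform recursively and then quantize the near-zero coefficients, which is where the lossy gain in compression ratio comes from.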
However, the existing data compression methods are not sufficiently fast when processing large volumes of data, so the time gained by performing I/O with compressed data may be lost. In other words, for large volumes of data, the computational cost of compression is high, and makes the use of compressed data during I/O processing unattractive.
The present invention aims to improve the speed of data compression so that it can be widely used during I/O processing, even with large volumes of data.