Generally, in the context of image processing, an image frame to be displayed on a screen, e.g., a television screen, is represented by a matrix structure of digital information representing a grid of pixels, and multiple color components are assigned to each pixel. For example, the luminance component Y and the chrominance components Cb and Cr each possess a level or amplitude for the pixel considered. Such a pixel structure, or "bitmap", therefore corresponds, bit for bit or pixel by pixel, to the image which has to be displayed on the screen (this is then referred to as a "raster" image). Generally speaking, the pixel structure is in the same format as that used for storage in the video memory of the screen. The raster frame thus stored in the video memory will be read pixel by pixel along a row, and row by row. This is then referred to as a "raster scan".
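As a minimal illustrative sketch (not taken from the source), the raster-scan reading order described above can be expressed as follows, assuming the bitmap is simply a two-dimensional array of pixel values:

```python
def raster_scan(frame):
    """Yield pixels in raster-scan order: pixel by pixel along a row, row by row.

    `frame` is assumed to be a list of rows, each row a list of pixel values
    (e.g., (Y, Cb, Cr) triplets); the actual memory layout in a video memory
    may differ.
    """
    for row in frame:          # row by row
        for pixel in row:      # pixel by pixel along the row
            yield pixel
```

For example, a 2x2 frame `[[1, 2], [3, 4]]` is read in the order 1, 2, 3, 4.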
Currently, the size of the frames used for high-definition digital television (HDTV) is a size known as "2K1K", i.e., comprising 1080 rows of 1920 pixels each. Moreover, the frame frequency, i.e., the number of frames per second, is 60 Hz.
For transmitting such an image signal issued by the TV decoder over the wired connection linking this decoder to the television, the image signal issued by the decoder is compressed. Indeed, transmitting such an image signal without compression requires extremely high transfer speeds, which are generally costly and create electromagnetic interference. This is the reason for compressing the signal issued by the decoder.
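A back-of-the-envelope calculation makes the required transfer speed concrete. Assuming the 2K1K format above, 60 frames per second, three components (Y, Cb, Cr) per pixel at 8 bits each, and no chroma subsampling (an assumption for illustration, not stated in the source):

```python
# Uncompressed bandwidth of a 2K1K, 60 Hz video signal,
# assuming 8 bits per component and no chroma subsampling.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 3 * 8                              # Y, Cb, Cr at 8 bits each
bitrate = width * height * fps * bits_per_pixel     # bits per second
print(bitrate / 1e9)                                # about 2.99 Gbit/s
```

Roughly 3 Gbit/s over a wired link is indeed costly to sustain without compression.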
Compression/decompression processing may also be desirable for storing images in a memory internal or external to the decoder. A video signal is generally received in an encoded format, e.g., according to the H264 or HEVC standards. The signal is decoded into an RGB or YCbCr image format, which is more voluminous in terms of memory space.
However, various processing operations are usually applied to the decoded images. Between successive operations, the images are stored, e.g., in buffer memories, in the decoded format. To limit the capacity of the memories used, it is advantageous to compress the decoded images before storing them in the memory, then decompress them when they are read from the memory before processing. Such compressions/decompressions should not introduce any degradation of the images.
Currently, conventional compression of a video signal may be performed by applying a two-dimensional low-pass filter to the chrominance components of the image signal. However, even if the quality of the image finally displayed on the screen remains acceptable, high-frequency information of the image signal may be lost.
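As a hedged sketch of this kind of filtering, the following uses a simple 3x3 box (mean) filter as a stand-in for the two-dimensional low-pass filter applied to a chrominance plane; the source does not specify the filter kernel, and border pixels are left unfiltered here for brevity:

```python
def box_filter_3x3(plane):
    """Apply a 3x3 mean filter to a 2D chrominance plane (list of rows).

    This is an illustrative low-pass filter: averaging each pixel with its
    eight neighbors attenuates high spatial frequencies, which is exactly
    how such filtering can lose high-frequency image information.
    Border pixels are copied through unfiltered.
    """
    h, w = len(plane), len(plane[0])
    out = [row[:] for row in plane]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            s = sum(plane[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = s // 9
    return out
```

An isolated bright chroma sample (a high-frequency detail) is strongly attenuated: in a 3x3 plane that is all zeros except a central value of 9, the filtered central value drops to 1.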