The amount of data representing media information, such as still images and video, can be extremely large. Further, transmitting digital video information over networks can consume large amounts of bandwidth. The cost of transmitting data from one location to another is a function of the number of bits transmitted per second. Typically, higher bit transfer rates are associated with increased cost. Higher bit rates also progressively add to the required storage capacities of memory systems, thereby increasing storage cost. Thus, at a given quality level, it is much more cost effective to use fewer bits, rather than more bits, to store digital images and videos.
It is therefore desirable to compress media data for recording, transmitting, and storing. For a typical compression scheme, achieving higher media quality generally requires more bits, which, in turn, increases the cost of transmission and storage. Moreover, lower-bandwidth traffic is desired, but so is higher-quality media. Existing systems and methods have limited efficiency and effectiveness.
A codec is a device capable of coding and/or decoding digital media data. The term codec is derived from a combination of the terms code and decode, or compress and decompress. Codecs can reduce the number of bits required to transmit signals, thereby reducing associated transmission costs. A variety of codecs are commercially available. Generally speaking, codec classifications include discrete cosine transform codecs, fractal codecs, and wavelet codecs.
In general, lossless data compression amounts to reducing or removing redundancies that exist in data. Media information can also be compressed with information loss, even where no redundancies exist. Such a lossy compression scheme relies on the assumption that some information can be neglected: image and video features to which the human eye is not sensitive are removed, while features to which the eye is sensitive are retained.
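The lossless case described above can be illustrated with a minimal sketch. The run-length encoding below is one simple way of removing redundancy; the function names are illustrative, not drawn from any particular codec.

```python
# A minimal sketch of lossless redundancy removal via run-length
# encoding (RLE): runs of identical values are stored once with a count.
# Names here are illustrative, not from any standard.

def rle_encode(data):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

def rle_decode(pairs):
    """Expand (value, count) pairs back to the original sequence."""
    return [value for value, count in pairs for _ in range(count)]

row = [255, 255, 255, 255, 0, 0, 128]  # a flat image row with runs
packed = rle_encode(row)               # [(255, 4), (0, 2), (128, 1)]
restored = rle_decode(packed)          # lossless: exact reconstruction
```

Because decoding recovers the input exactly, no information is lost; a lossy scheme, by contrast, would discard detail that cannot be recovered.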
Most video compression techniques and devices employ an encoding scheme based on motion compensation and transformation. For example, in a typical video encoding process, a digital video signal first undergoes intra prediction or inter prediction using motion compensation to produce a residual signal. The residual signal is then converted to transform coefficients using a transform algorithm, and the transform coefficients are quantized. Finally, entropy encoding, such as variable-length coding or arithmetic coding, is performed on the quantized transform coefficients as well as on the coding modes and motion vectors used in the intra prediction or motion compensation phase. To decode, an entropy decoder converts the compressed data from an encoder into coding modes, motion vectors, and quantized transform coefficients. The quantized transform coefficients are inverse-quantized and inverse-transformed to generate the residual signal. A decoded image is then reconstructed by compositing the residual signal with a prediction signal formed using the coding modes and motion vectors, and the reconstructed image is stored in memory. At a given bit rate, the amount of difference between the video input and the reconstructed video output is an indication of the quality of the compression technique; the highest-quality technique yields a signal reconstruction closest to the original video input.
The presence of noise in a media signal can have a significant impact on compression efficiency. Because noise is random, it is typically hard to compress, as it exhibits low predictability and redundancy. Noise can be introduced into media signals from one or more sources. For example, artifacts can originate from imaging and recording equipment, from environmental circuitry, from transmission equipment, from communication channels, or from codecs.
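The effect of noise on compressibility can be demonstrated directly with a general-purpose lossless compressor. The signal construction below is illustrative, and the exact compressed sizes depend on the compressor used.

```python
import random
import zlib

# A repetitive "clean" signal: a ramp pattern repeating every 64 samples.
clean = bytes(128 + int(20 * ((i % 64) / 64)) for i in range(4096))

# The same signal with small random perturbations added to each sample.
random.seed(0)
noisy = bytes(min(255, max(0, b + random.randint(-8, 8))) for b in clean)

clean_size = len(zlib.compress(clean))  # highly redundant, compresses well
noisy_size = len(zlib.compress(noisy))  # noise leaves little redundancy
```

Even though the noisy signal differs from the clean one by at most a few levels per sample, its compressed size is far larger, because the random component carries no redundancy for the compressor to exploit.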