This invention relates in general to the representation of digital information and more specifically to a system in which a representation of target digital data is used, together with a data cue derivable at a receiving device, to recover the target data.
Digital compression techniques have become extremely important for reducing the size of digital content and thereby improving the overall bandwidth efficiency of digital systems. For example, standards bodies such as the Moving Picture Experts Group (MPEG) promulgate various popular standards for compression of digital video. Many approaches to data compression are in use for various types of digital content such as video, still images, audio, etc.
A measure of the performance of compression and coding schemes is the “compression ratio.” A compression ratio is the size of the original content divided by the size of the same content after compression. For example, with schemes such as MPEG-2 and H.263, compression ratios of 20-30 are typically attainable.
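As a simple illustration of the ratio defined above (the byte counts here are hypothetical example figures, not measurements of any particular codec):

```python
# Compression ratio = original size / compressed size.
# The figures below are hypothetical, for illustration only.
original_bytes = 30_000_000   # e.g., a short clip of uncompressed video
compressed_bytes = 1_200_000  # the same clip after encoding

compression_ratio = original_bytes / compressed_bytes
print(f"compression ratio: {compression_ratio:.0f}:1")
```

A ratio of 25:1 as in this example falls within the 20-30 range noted above for schemes such as MPEG-2 and H.263.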
A drawback of compression schemes is that they require digital processing to compress (or encode) the data at a transmitter and to decompress (or decode) the compressed data at a receiver. Another property of many compression schemes is that higher compression ratios are only achieved by using more complex processing. In other words, obtaining higher compression ratios requires more powerful and expensive processing chips and introduces delay due to extended processing time. Low-complexity processing typically achieves lower compression ratios.
For many of today's coding techniques, e.g., digital video coding standards such as MPEG and H.26x, the overall complexity remains quite high and is unequally distributed between the encoder and the decoder. The encoder has higher complexity, while the decoder, which operates in a slave mode to the encoder, has lower complexity. It is desirable to achieve a compression scheme that allows the processing complexity to be varied, managed and distributed while also providing competitive compression performance.
For many of today's coding techniques, such as digital video coding standards, the trend has been toward increasingly large and rigorous coding specifications, or syntaxes. This has resulted in higher compression ratios but has also made the compression schemes very inflexible. For example, the decoder operation is completely specified in terms of the prediction algorithm to be used. Every decoder must operate in the same way in order to successfully decode the highly specified syntax of the encoded streams. It is desirable to achieve a compression scheme that allows more freedom in the algorithms that can be used at the decoder, allowing smarter decoding algorithms to obtain better performance.
“Robustness” is the ability of a coding scheme to tolerate errors or dropouts in data and is another factor in data communication. Errors or dropouts can occur, for example, if a communication channel or physical link used to transfer data is “noisy” or otherwise prone to interference or deficiencies. Data corruption can also occur when data is being processed, stored or otherwise manipulated. Highly compressed data streams are usually more susceptible to errors occurring during transmission than less compressed, or uncompressed, data. Often, a compressed data stream transmission is made robust by adding forward error correction (FEC) codes to the compressed data stream or by allowing retransmissions (ARQ: Automatic Repeat reQuest) in case an error occurs. These approaches can require increased complexity and encoding delay. The latter also increases delay and requires a communications channel in the reverse direction. It is desirable to achieve a coding scheme that provides reliable performance with low complexity and low delay.
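The FEC principle described above can be sketched with a minimal triple-repetition code (real systems use far stronger codes, e.g., Reed-Solomon or convolutional codes; this toy example only illustrates how added redundancy lets a decoder correct a channel error without any retransmission, at the cost of extra bandwidth):

```python
# Toy forward error correction: a triple-repetition code.
# The encoder adds redundancy (3x overhead); the decoder corrects
# any single-bit error per group by majority vote, with no reverse
# channel or retransmission needed.

def fec_encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def fec_decode(coded):
    """Majority-vote each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
sent = fec_encode(data)          # 12 bits on the channel instead of 4
sent[4] ^= 1                     # simulate a single-bit channel error
assert fec_decode(sent) == data  # error corrected at the decoder
```

The 3x overhead makes the tradeoff explicit: robustness is bought with either extra bandwidth and encoder/decoder complexity (FEC) or extra delay and a reverse channel (ARQ), which is the motivation stated above for seeking a scheme that is robust with low complexity and low delay.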