1. Field of the Invention
The present invention relates to the transmission of data via an erroneous transmission channel, and in particular to a concept of concealing an error in an erroneous or potentially erroneous information unit which has been transmitted via the erroneous channel.
2. Description of Prior Art
In particular with audio and/or video coding there is a need to make the coding/decoding concepts robust against transmission errors. It has been known, in particular from wireless transmission technology, to perform a forward error correction (FEC) on the coder side. With this concept, redundancy is introduced into the data stream by a coder. This redundancy may then be exploited in a decoder, e.g. using a Viterbi decoding block, so as to correct transmission errors that arose during the transmission. This method is disadvantageous in that adding redundancy increases the transmission rate via the channel. With highly disturbed transmission channels, such as wireless transmission channels, however, there is often no alternative if reliable reception in the receiver/decoder is to be enabled under non-optimal channel conditions.
On the other hand, a main goal, especially with audio or video compression methods, is to compress audio or video data as highly as possible to enable a transmission via channels which are typically not highly disturbed, such as line-conducted channels, but which allow only a limited data rate. This is why such prior-art compression methods, as are standardized in the MPEG family, often utilize entropy codes for entropy-coding quantized data, such as spectral values. A known representative of an entropy-coding method is so-called Huffman coding, which enables coding of a set of values with nearly minimum redundancy. Prior to assigning the Huffman code words to the individual information units, the statistics of the information units are determined so as to associate the most frequently occurring information unit with a code word having as short a length as possible, whereas an information unit which occurs very rarely is assigned a code word with a longer length (with regard to the number of bits).
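The frequency-driven assignment of code word lengths described above may be illustrated with a minimal sketch of the classical Huffman construction. The symbol alphabet, frequencies, and function name are invented for illustration only:

```python
import heapq
from itertools import count

def build_huffman_code(freqs):
    """Build a prefix-free code table from {symbol: frequency}.

    Frequent symbols receive short code words, rare ones long code words.
    """
    tiebreak = count()  # keeps heap comparisons away from the node objects
    heap = [(f, next(tiebreak), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # inner node: recurse into both subtrees
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:                         # leaf: assign the accumulated bit string
            code[node] = prefix or '0'
    walk(heap[0][2], '')
    return code

code = build_huffman_code({'a': 0.5, 'b': 0.3, 'c': 0.2})
```

The resulting table assigns the one-bit code word to the most frequent symbol `a` and two-bit code words to the rarer symbols.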
Huffman codes are problematic in that one cannot see, from a data stream of Huffman code words, where a code word starts (aside from the first code word), and where a code word ends. A data stream of Huffman code words typically consists of a string of binary ones and zeros. A decoder decodes such a data stream of Huffman code words while knowing the table of codes on which the coding was based. The table of codes, which may also be represented as a code tree, is configured such that the end of a code word inherently results because the code is free from prefixes (in the code tree, only leaves are valid code words). The Huffman code has the characteristic that all “branches” of the tree are complete, i.e. lead to valid code words.
If a transmission error arises during the transmission of such a data stream of Huffman code words, this will almost inevitably mean that even though all code words have been decoded correctly up to the occurrence of the error, all code words are decoded in an erroneous manner after the occurrence of the error, which may relate to, e.g., only one bit. If a code is selected wherein all branches are terminated with valid code words, the decoder will keep on decoding irrespective of the error and will not establish until the end of the data stream, namely when the bits run out in the middle of a code word or when surplus bits remain, that an error has occurred somewhere further upstream. Thus, the Huffman code, as an example of an entropy code, is favorable from the point of view of data rate compression. However, its error robustness is minimal, since as little as one bit error in the very first code word is very likely to cause all subsequent code words to be recognized incorrectly by the decoder even though they have not been affected by the bit error.
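The loss of synchronization described above may be sketched with a small hand-built complete prefix-free code (a toy table assumed for illustration, not one from any standard): a single flipped bit goes undetected, and the decoder emits wrong symbols from the error onwards.

```python
# Toy complete prefix-free code: every branch of the code tree ends in
# a valid code word, so the decoder can never detect the bit error.
CODE = {'a': '0', 'b': '10', 'c': '11'}
INV = {w: s for s, w in CODE.items()}

def encode(symbols):
    return ''.join(CODE[s] for s in symbols)

def decode(bits):
    """Greedy prefix-free decoding: the end of each code word results
    implicitly from the table; returns (symbols, leftover bits)."""
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in INV:
            out.append(INV[buf])
            buf = ''
    return out, buf

bits = encode(['a', 'a', 'b', 'a', 'c'])              # '0010011'
corrupt = bits[:2] + ('1' if bits[2] == '0' else '0') + bits[3:]
intact, _ = decode(bits)      # ['a', 'a', 'b', 'a', 'c']
wrong, _ = decode(corrupt)    # desynchronized from the flipped bit on
```

Here even the number of decoded symbols changes after the single bit error, yet every bit still parses as part of some valid code word.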
A more error-robust approach has been described in U.S. Pat. No. 5,852,469. Instead of using Huffman codes, said U.S. patent suggests using reversible codes (i.e. codes which may be decoded from both sides) having code words of variable lengths (referred to below as reversible code words), such as symmetrical code words. These codes, which in addition to the property of freedom from prefixes also have the property of being free from suffixes, are also referred to as RVL (reversible variable length) codes. Moreover, in addition to the forward decoder, which by itself is sufficient for Huffman decoding, a backward decoder is used. The forward decoder performs a decoding from a starting point of a block of reversible code words of variable lengths, whereas the backward decoder starts from an end point of the block of reversible code words of variable lengths.
Reversible code words, such as symmetrical code words, may also have the property that they lead to a code tree wherein not all branches are terminated by valid code words. A decoder may therefore recognize already during the course of the decoding that an error has occurred, namely when it comes across such an invalid code word, i.e. a code word not provided in the table of codes. However, the decoder cannot determine with certainty whether the error was located precisely in the code word which has been recognized as invalid. Since, for reasons of data compression, the table of codes is selected such that only few invalid code words exist, a decoder may not come across an invalid code word until several code words after the occurrence of the transmission error in the bit stream. Therefore, an error in the bit stream leads to continuation errors which, however, no longer relate to the entire remaining bit stream, as they do with the Huffman code, but which typically propagate only over several code words after the occurrence of the error. For error limitation, the backward decoder is provided in addition to the forward decoder; due to the continuation errors, the backward decoder will likewise output a few code words as seemingly correct code words in the backward direction beyond the error position, if it is assumed that only one bit error exists in the data stream. An error may be recognized if the forward decoder and the backward decoder output different decoded information units for information units of the same ordinal number.
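The two error-recognition mechanisms described above, namely hitting an invalid code word and a disagreement between the forward and backward decoders, may be sketched with a toy symmetrical code (invented for illustration; it is not a table from the cited patent):

```python
# Toy reversible code with symmetrical (palindromic) code words; it is
# both prefix-free and suffix-free, and not all tree branches end in a
# valid code word (e.g. '100' is invalid).
RVLC = {'a': '0', 'b': '11', 'c': '101'}
INV = {w: s for s, w in RVLC.items()}
MAXLEN = max(len(w) for w in RVLC.values())

def decode_forward(bits):
    """Decode left to right; return (symbols, bit index at which an
    invalid code word or a truncated tail was recognized, or None)."""
    out, buf = [], ''
    for i, bit in enumerate(bits):
        buf += bit
        if buf in INV:
            out.append(INV[buf])
            buf = ''
        elif len(buf) >= MAXLEN:   # no valid code word fits: error found
            return out, i
    return out, (len(bits) - 1 if buf else None)

def decode_backward(bits):
    """Because the code words are symmetrical, the reversed bit stream
    can be decoded with the same table; the output is then reversed."""
    out, err = decode_forward(bits[::-1])
    return out[::-1], err

bits = ''.join(RVLC[s] for s in 'abcab')   # '011101011'
corrupt = bits[:5] + '0' + bits[6:]        # single bit error at position 5
fwd_out, fwd_err = decode_forward(corrupt)
bwd_out, bwd_err = decode_backward(corrupt)
```

On the intact stream both decoders agree; on the corrupted stream the forward decoder stops at the invalid pattern `'100'` after a few code words, and the backward decoder's output disagrees with the forward decoder's, so the error area can be bounded.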
Therefore, U.S. Pat. No. 5,852,469 suggests discarding all outputs in this error area and using, for receiver-side further processing, only those information units which have been decoded from the starting point of the block of code words up to the beginning of the error or overlap area, and, in addition, only those information units which have been decoded from the end point of the block of code words up to the end (seen in the forward direction) of the overlap area.
DE 198 40 835 A1 also discloses a device and a method for entropy-coding information words, and a device and a method for decoding entropy-coded information words. For information units decoded in the error or overlap area, it is suggested to utilize an error concealment technique. Potential error concealment techniques consist in simply replacing an erroneous value by its adjacent, intact value. If both intact values adjacent to an error are known, weighted mean values formed from the left and right edges may also be used to artificially replace, i.e. conceal, the erroneous value. Further error concealment techniques mentioned use interpolation between two adjacent values which have an error sandwiched between them. Similarly, a one-sided prediction, relating to the overlap area, may be performed from the front or from the back so as to replace an erroneous value by a value which is presumably relatively intact.
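The neighbour-substitution and interpolation techniques mentioned above may be sketched as follows; the position-weighted mean between the nearest intact neighbours is one possible weighting, assumed here for illustration:

```python
def conceal(values, bad):
    """Replace values flagged as erroneous (indices in `bad`) by a
    position-weighted mean of the nearest intact neighbours; at the
    block edges, the single intact neighbour is simply copied."""
    out = list(values)
    flagged = set(bad)
    for i in sorted(flagged):
        # Nearest intact neighbour to the left and to the right.
        left = next((j for j in range(i - 1, -1, -1) if j not in flagged), None)
        right = next((j for j in range(i + 1, len(out)) if j not in flagged), None)
        if left is not None and right is not None:
            w = (i - left) / (right - left)        # interpolation weight
            out[i] = (1 - w) * values[left] + w * values[right]
        elif left is not None:
            out[i] = values[left]                   # one-sided replacement
        elif right is not None:
            out[i] = values[right]
    return out
```

With a single flagged value this reduces to the mean of the two neighbours; with a run of flagged values it linearly bridges the gap, which already hints at the limitation discussed next.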
A disadvantage of this concept is that it becomes problematic if several successive information units are affected by continuation errors. Interpolation-based or prediction-based concealment techniques quickly reach their limits in this case, and the quality of the error concealment decreases more and more strongly.
U.S. Pat. No. 6,104,754 discloses a moving-picture coder/decoder system using a coding with code words of variable lengths. In particular, reversible code words of variable lengths are used which may be decoded from the front or from the back. If a forward decoder determines an error and a backward decoder also establishes an error, and the error areas do not overlap, the area including the two errors is discarded. If, on the other hand, the areas overlap, the output of the forward decoder is taken up to, but excluding, the error; from the error onwards, the output of the backward decoder is taken. Alternatively, the output of the backward decoder may be taken up to the error, and the output of the forward decoder from the error onwards. If an error is found in the forward decoder only, the output of the forward decoder is taken up to the error, and the output of the backward decoder from the error onwards. If both decoders determine an error in the same code word, the erroneous code word is discarded, the output of the forward decoder is taken up to the error, and the output of the backward decoder is taken up to the error.
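The combination rules described above may be sketched as follows, assuming (purely for illustration) that both decoder outputs are available as full-length symbol lists and that `fwd_err` and `bwd_err` denote the code word positions at which each decoder established an error:

```python
def combine(fwd, bwd, fwd_err, bwd_err):
    """Merge forward/backward decoder outputs for one block.

    `fwd` and `bwd` are full-length symbol lists; `fwd_err` and
    `bwd_err` are the positions at which each decoder found an error.
    Unrecoverable positions are marked with None.
    """
    n = len(fwd)
    out = [None] * n
    if fwd_err == bwd_err:
        # Both decoders fail on the same code word: discard only that word.
        out[:fwd_err] = fwd[:fwd_err]
        out[fwd_err + 1:] = bwd[fwd_err + 1:]
    elif fwd_err < bwd_err:
        # Non-overlapping error areas: discard the whole area
        # including both errors.
        out[:fwd_err] = fwd[:fwd_err]
        out[bwd_err + 1:] = bwd[bwd_err + 1:]
    else:
        # Overlapping areas: forward output up to (excluding) its
        # error, backward output from that position onwards.
        out[:fwd_err] = fwd[:fwd_err]
        out[fwd_err:] = bwd[fwd_err:]
    return out

# Overlap case: forward decoder is reliable up to position 3,
# backward decoder is reliable from position 2 onwards.
merged = combine(['A', 'B', 'C', '?', '?'], ['?', '?', 'C', 'D', 'E'], 3, 1)
```

In the overlap case the whole block is recovered; in the non-overlapping case the gap between the two error positions remains discarded.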
DE 19959038 A1 discloses a method of decoding digital audio data, wherein error recognition is performed in dependence on transmitted reference values, preferably scale factors. Here, reference values of a frequency range are compared with previous reference values of the same frequency range so as to generate a feature which is compared with a threshold value. If the feature exceeds the preset threshold value, this is indicated by means of signaling. For error concealment, reference values marked as erroneous are replaced by previously stored reference values.
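The threshold scheme described above may be sketched as follows; the absolute difference is an assumed feature, and all names and values are illustrative:

```python
def detect_and_conceal(current, previous, threshold):
    """Per frequency band, compare the transmitted reference value
    (e.g. a scale factor) with the stored value of the same band from
    the previous frame; flag it as erroneous if the difference feature
    exceeds the threshold, and replace flagged values by the stored
    previous ones."""
    out, flags = [], []
    for cur, prev in zip(current, previous):
        bad = abs(cur - prev) > threshold   # assumed difference feature
        flags.append(bad)                   # signaling of the detection
        out.append(prev if bad else cur)    # concealment by substitution
    return out, flags

# Band 2 jumps from 12 to 90: flagged and replaced by the stored value.
values, flags = detect_and_conceal([10, 12, 90, 11], [10, 11, 12, 10], 20)
```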