Video transmission over a wireless channel suffers far more from transmission errors than transmission over a wireline. In a wireless channel, average bit error rates of up to 10% are quite common, resulting in an unacceptable quality of the received video application. It will be appreciated therefore that channel coding is needed in order to bring the bit error rate down to an acceptable level. Classically, after removing the source redundancy, channel coding is performed independently of the source compression scheme, a separation justified by Shannon's separation theorem.
However, it will be appreciated that given the considered channel, characterised by tight constraints in terms of bandwidth and delay, and given the residual redundancy left by the source compression scheme, a joint source-channel coding approach is advisable. More precisely, the channel coding and decoding may take advantage of this residual redundancy. A suitable technique taking into account the characteristics both of the wireless channel and of the application should thus be considered.
Specifically, the information about the different sensitivity of source bits to channel errors should be exploited through Unequal Error Protection (UEP). This technique consists in performing error protection according to the sensitivity of source bits to errors: more sensitive bits are protected with a lower-rate code, while a higher-rate code is used for less important bits.
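By way of illustration, the UEP principle described above can be sketched as follows. Simple repetition codes stand in for the punctured convolutional codes typically used in practice; the bit values and repetition factors are purely illustrative assumptions.

```python
# Illustrative sketch of Unequal Error Protection (UEP): bits are grouped
# into sensitivity classes, and each class is encoded with a different-rate
# code. Repetition codes are used here only as a stand-in for the
# punctured convolutional codes used in real systems.

def repeat_encode(bits, n):
    """Rate-1/n repetition code: emit each bit n times."""
    return [b for b in bits for _ in range(n)]

def uep_encode(classes):
    """classes: list of (bits, n) pairs, ordered most- to least-sensitive,
    where n is the repetition factor (lower code rate = stronger protection)."""
    return [repeat_encode(bits, n) for bits, n in classes]

# Hypothetical example: header bits most protected, texture bits uncoded.
header  = [1, 0, 1]        # most sensitive:  rate 1/3
motion  = [0, 1, 1, 0]     # medium:          rate 1/2
texture = [1, 1, 0, 0, 1]  # least sensitive: rate 1 (uncoded)

coded = uep_encode([(header, 3), (motion, 2), (texture, 1)])
```

At the same overall bit budget, this trades redundancy on the least important bits for extra redundancy on the most important ones, which is the source of the perceptual gain over equal protection.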
Compared to classical Forward Error Correction, UEP achieves a higher perceived video quality at the same bit rate by exploiting the characteristics of the source. Such a technique is described in EP 1 018 815 of Motorola, which describes a method and apparatus for processing information for transmission in a communication system.
This approach can be advantageously combined with the data-partitioning tool available in the MPEG-4 standard, as described in MPEG-4 Video Group, “Overview of the MPEG-4 Standard”, ISO/IEC JTC1/SC29/WG11 N3444, Geneva, May-June 2000, wherein the information bits contained in each packet are separated into three partitions, each of which has a different sensitivity to channel errors. Using the examples illustrated in FIG. 1, a typical P frame 100 comprises a packet start STRT preceding a header 101, a motion partition 102 and a texture partition 103, the latter two separated by a motion marker 104. Similarly, for I frames 120, the partitions comprise a header 121, a DC partition 122 and an AC partition 123 separated by a DC marker 124. In each example the three partitions are protected with different code rates, according to the subjective importance of the relevant information.
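The packet layouts of FIG. 1 can be modelled as follows. The partition names follow the figure; the concrete code rates paired with each partition are illustrative assumptions, not values taken from the standard.

```python
# Hypothetical model of the data-partitioned packet layouts of FIG. 1,
# pairing each partition with an assumed code rate (lower rate = more
# redundancy = stronger protection). The rates are illustrative only.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    payload: bytes
    code_rate: float

def p_frame_packet(header, motion, texture):
    """P frame 100: header 101 | motion 102 | (motion marker) | texture 103."""
    return [Partition("header", header, 1/3),
            Partition("motion", motion, 1/2),
            Partition("texture", texture, 2/3)]

def i_frame_packet(header, dc, ac):
    """I frame 120: header 121 | DC 122 | (DC marker) | AC 123."""
    return [Partition("header", header, 1/3),
            Partition("dc", dc, 1/2),
            Partition("ac", ac, 2/3)]
```

In both layouts the code rate increases from the header towards the last partition, mirroring the decreasing subjective importance of the data.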
Information contained in headers is crucial for the subsequent decoding of the packet, and headers should therefore be strongly protected. Using the example of the P frame, it will be appreciated that motion data should be protected more than texture data: if the motion information is correctly received but the texture information is lost, the decoder can still perform motion-compensated concealment without too much degradation of the reconstructed picture.
The main problem in the application of such a scheme is that neither packets nor partitions are of constant length, so the UEP scheme should be dynamically changed for each packet and knowledge of each partition length is required. In order to cope with this problem, techniques using either fixed proportional lengths or lengths read from a field suitably inserted in the bitstream have been suggested in M. G. Martini, M. Chiani, “Proportional Unequal Error Protection for MPEG-4 Video Transmission”, proc. IEEE International Conference on Communications (ICC) 2001, pp. 1033-1037, Helsinki, June 2001, and M. G. Martini, M. Chiani, “Robust Transmission of MPEG-4 Video: Start Codes Substitution and Length Field Insertion Assisted Unequal Error Protection”, Picture Coding Symposium—PCS 2001, Seoul, April 2001.
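The fixed-proportional-lengths idea can be sketched as follows: when the actual partition boundaries are unknown at the channel coder, each partition is assumed to occupy a fixed fraction of the packet, and the code rate is switched at those assumed positions. The fractions used below are illustrative assumptions, not values from the cited papers.

```python
# Sketch of proportional UEP: switch code rates at fixed fractional
# positions within each packet, so no per-packet partition-length
# signalling is needed. The fractions are illustrative only.

def proportional_boundaries(packet_len, fractions):
    """Return the cumulative bit positions at which the code rate
    changes, given per-partition fractions that sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    bounds, pos = [], 0
    for f in fractions[:-1]:
        pos += round(packet_len * f)
        bounds.append(pos)
    return bounds

# e.g. a 600-bit packet assumed to split 10% header / 40% motion /
# 50% texture: the code rate switches at bits 60 and 300.
```

The alternative cited above, inserting an explicit length field in the bitstream, trades a small rate overhead for exact rather than assumed boundaries.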
Although these techniques enable motion data to be protected more than texture data, they do not compensate for differing types of motion or texture data. Errors in certain portions of the scene, such as high-motion or highly detailed areas, are more annoying than errors in less active regions, and the known techniques are not adapted to compensate for such variances. There is therefore a need to protect regions with high motion and/or texture activity more than low-activity areas.