Wireless transmission is characterized by a relatively low transmission bit rate and a high error rate. Here, a low bit rate means a bit rate of less than 100 kbit/s, for example the 14 kbit/s of a GSM channel, and a high error rate means an error rate greater than 10⁻⁶, or even greater than 10⁻⁴.
The ITU-T H.263+ standard proposes solutions for video source coding. The standard comprises an obligatory common part and a set of optional appendices which can be implemented in addition to the common part of the standard. The standard gives no indication as to the quality of the pictures obtained or the combination of appendices that apply in given circumstances, in particular for wireless transmission. Implementing all the appendices yields a system that is difficult to apply to wireless transmission because of the low transmission bit rate available and the error rate.
The common part of the H.263+ standard proposes block coding with prediction. A distinction is made in a sequence of pictures between pictures that are transmitted integrally (referred to as “I” pictures), pictures that are not transmitted integrally but are predicted from a preceding picture (referred to as “P” pictures), and pictures that are not transmitted integrally but are predicted from both a preceding picture and a succeeding picture (referred to as “B” pictures). For a picture made up of blocks, or of macroblocks comprising a plurality of blocks, typically six blocks of which four are luminance blocks and two are chrominance blocks, the coding process uses the following steps:
- estimating the motion of the blocks or macroblocks of each picture,
- predicting motion compensation relative to a reference picture, and
- coding for transmission, typically coding with compression, including a discrete cosine transform (DCT), quantization, and variable length coding (VLC).
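The steps above can be sketched in Python. This is a minimal illustration under assumptions of my own, not the standard's algorithm: a full-search block matching on one block stands in for motion estimation, the motion-compensated residual is formed explicitly, and a plain uniform quantizer stands in for the full DCT/VLC chain; the frame size, block size, and search range are arbitrary.

```python
def block_at(frame, y, x, n):
    """Extract the n x n block whose top-left corner is (y, x)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def estimate_motion(ref, cur, y, x, n, search):
    """Full-search motion estimation: find the (dy, dx) within
    +/- search pixels that minimises the SAD between the current
    block at (y, x) and a candidate block in the reference picture,
    keeping the candidate entirely inside the reference frame."""
    h, w = len(ref), len(ref[0])
    target = block_at(cur, y, x, n)
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= h - n and 0 <= rx <= w - n:
                cost = sad(target, block_at(ref, ry, rx, n))
                if cost < best[2]:
                    best = (dy, dx, cost)
    return best[:2]

def motion_residual(ref, cur, y, x, dy, dx, n):
    """Residual between the current block and its motion-compensated
    prediction taken from the reference picture."""
    pred = block_at(ref, y + dy, x + dx, n)
    tgt = block_at(cur, y, x, n)
    return [[t - p for t, p in zip(rt, rp)]
            for rt, rp in zip(tgt, pred)]

def quantize(block, step):
    """Uniform quantizer (truncation toward zero), standing in for
    the DCT + quantization + VLC chain of the real coder."""
    return [[int(v / step) for v in row] for row in block]
```

When the current picture is simply a shifted copy of the reference, the estimated vector recovers the shift and the residual is zero, which is the best case for predictive coding.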
Appendix R of the above standard, entitled “Independent segment decoding mode”, proposes to use video picture segments made up of a set of blocks for motion estimation processing; motion is estimated independently in each segment. In this case, the boundaries of a segment are treated as boundaries of the picture for the purposes of decoding, including for the purposes of processing motion vectors that cross boundaries. Motion vectors that cross boundaries are prohibited in the absence of the optional modes of appendices D, F, J, and O. If the unrestricted motion vector mode of appendix D is used, the boundaries of the current video picture segment are extrapolated to constitute predictions of pixels outside the segment region.
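To make the segment-as-picture rule concrete, here is a small Python sketch under a simplifying assumption of my own: a segment is modelled as a horizontal stripe of pixel rows, which is not how the standard defines its block-based segments. A prediction reference falling outside the segment is clamped to the segment's nearest edge pixel, mirroring the edge extrapolation used when appendix D is combined with this mode.

```python
def segment_pixel(frame, seg_top, seg_bottom, y, x):
    """Fetch a prediction pixel while treating the row range
    [seg_top, seg_bottom) as if it were the whole picture: a
    reference outside the segment is clamped to the segment's
    nearest edge pixel, each coordinate independently.
    Modelling a segment as a stripe of rows is a simplification."""
    yc = max(seg_top, min(y, seg_bottom - 1))
    xc = max(0, min(x, len(frame[0]) - 1))
    return frame[yc][xc]
```

Because the clamp uses the segment's own bounds rather than the picture's, a decoder that has lost every other segment can still reconstruct this one, which is the point of independent segment decoding.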
Appendix D of the standard, entitled “Unrestricted motion vector mode”, proposes an exception to the principle stated in the common part of the standard, whereby motion vectors are restricted so that all the pixels to which they refer are in the coded area of the picture. To be more specific, appendix D proposes that motion vectors be allowed to point outside the picture. If a pixel to which a motion vector refers is outside the coded picture surface, an edge pixel is used in its place. The value of the edge pixel is obtained by limiting the motion vector to the last position corresponding to an entire pixel within the coded picture surface. The motion vector is limited with one-pixel granularity, separately for each of its components.
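This edge-pixel rule can be sketched in a few lines of Python (the function names are illustrative, not from the standard): each coordinate of a referenced pixel is clamped independently to the last full-pixel position inside the coded picture, so a motion vector pointing outside the picture reuses the nearest edge pixel.

```python
def edge_pixel(frame, y, x):
    """Return the pixel at (y, x), clamping each coordinate
    independently to the coded picture surface; a reference
    outside the picture falls back to the nearest edge pixel."""
    h, w = len(frame), len(frame[0])
    return frame[max(0, min(y, h - 1))][max(0, min(x, w - 1))]

def predict_block(ref, y, x, dy, dx, n):
    """Motion-compensated prediction for the n x n block at (y, x)
    whose vector (dy, dx) may point outside the picture; any
    out-of-bounds reference is replaced by an edge pixel."""
    return [[edge_pixel(ref, y + dy + i, x + dx + j) for j in range(n)]
            for i in range(n)]
```

For a vector that stays inside the picture the prediction is an ordinary displaced block; for a vector pointing past the top-left corner, every referenced pixel collapses onto the corner pixel.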
Other appendices propose that the motion vectors be allowed to cross the boundaries of a picture or picture segment. This applies to appendices F, entitled “Advanced prediction mode”, J, entitled “Deblocking filter mode”, and O, entitled “Temporal, SNR and spatial scalability”.