The present invention relates to the decoding of an encoded video bitstream, and in particular to the parsing of that encoded video bitstream.
Contemporary video compression codecs allow high quality video to be greatly compressed, such that it may be conveniently transmitted or stored, before being decoded for subsequent display. Nevertheless, despite the great advances made in video compression technology, the continual drive towards ever higher quality of video means that high bandwidth transmissions of video data are commonplace. For example, high definition video at 1080p30 may require a bandwidth of between 8 Mbit/s (at relatively low quality) and around 45 Mbit/s (at high definition TV quality).
Various techniques are implemented to achieve the high levels of compression necessary to transmit high quality video data, such as entropy coding (e.g. context-adaptive binary arithmetic coding (CABAC) and variable length coding (VLC)). These techniques require the use of a bitstream parser to translate coded elements of the bitstream into a format which can be decoded into displayable video data. For example, a typical space-saving technique when encoding video data is to translate commonly seen bit sequences into shorter codes representing those bit sequences. When such a bitstream is received at a video decoder, one of the first steps required is to translate these codes back into the original bit sequences. Furthermore, parsing the bitstream enables the video decoder to identify the structure of the video bitstream, for example where each frame begins and ends and which macroblocks belong to each frame, such that those macroblocks may be decoded. It is also known to increase coding density by providing that the parsing of a given frame depends on the result of parsing a previous frame.
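The translation of short codes back into the bit sequences they represent may be illustrated by the following minimal sketch of prefix-code (variable length code) decoding. The code table shown is purely hypothetical and does not correspond to the tables of any real codec; it merely demonstrates that frequent symbols receive shorter codes and that the prefix property lets the parser recognise code boundaries without explicit separators.

```python
# Hypothetical prefix-code table: more frequent symbols get shorter codes.
# (Illustrative only; not taken from any real codec standard.)
VLC_TABLE = {
    "1": "A",    # most frequent symbol, shortest code
    "01": "B",
    "001": "C",
    "000": "D",
}

def vlc_decode(bits):
    """Translate a bit string back into the original symbol sequence."""
    symbols = []
    code = ""
    for bit in bits:
        code += bit
        # The prefix property guarantees no valid code is a prefix of
        # another, so a match unambiguously ends the current code word.
        if code in VLC_TABLE:
            symbols.append(VLC_TABLE[code])
            code = ""
    if code:
        raise ValueError("truncated bitstream: incomplete code word")
    return symbols

print(vlc_decode("1010001"))  # ['A', 'B', 'D', 'A']
```

In a real parser the table lookups are typically performed on a binary buffer with dedicated bit-reading logic rather than on character strings, but the decoding principle is the same.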
Various techniques for improving decoding performance are known. Firstly, it is known to provide dedicated hardware for each component of a video decoder, optimised in terms of performance. However, this approach is costly to design and can be power-hungry. Secondly, it is known to decode independent slices (consecutive sequences of macroblocks within a frame) in parallel by making use of multi-threading techniques. However, the ability to do this in the decoder depends on the slice format provided by the encoder, and in many real-world situations it is desirable that a video decoder does not impose any such constraints on the video encoder.
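The slice-level parallelism described above can be sketched as follows, assuming the encoder has produced a frame containing multiple independent slices. The `decode_slice` function here is a stand-in for a real macroblock decoding routine (it simply reverses the slice bytes); the point of the sketch is only that, because independent slices carry no inter-slice dependencies, each may be handed to its own worker thread.

```python
# Sketch of slice-level parallel decoding. Assumes the encoder emitted
# independent slices; decode_slice is a hypothetical placeholder for the
# real per-slice macroblock decode work.
from concurrent.futures import ThreadPoolExecutor

def decode_slice(slice_data: bytes) -> bytes:
    # Placeholder for real decode work on one slice's macroblocks.
    return bytes(reversed(slice_data))

def decode_frame(slices):
    # Each independent slice can be decoded concurrently; results are
    # returned in slice order for reassembly into the frame.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(decode_slice, slices))

frame = [b"slice0", b"slice1", b"slice2"]
print(decode_frame(frame))
```

Note that this parallelism is only available when the encoder chose to emit multiple slices per frame, which is precisely the encoder-side constraint the present discussion seeks to avoid.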
Some papers which look at the parallelisation of video decoding are: A. Azevedo et al., “Parallel H.264 Decoding on an Embedded Multicore Processor,” in Proceedings of the 4th International Conference on High Performance and Embedded Architectures and Compilers (HiPEAC), January 2009; C. H. Meenderinck et al., “Parallel Scalability of H.264”, in Proceedings of the First Workshop on Programmability Issues for Multi-Core Computers, 2008; and “Parallel Scalability of Video Decoders”, Journal of Signal Processing Systems, Springer, New York, Vol. 57, No. 2, November 2009, pp. 173-194. It will be recognised that these papers focus on the opportunities for the parallelisation of decoding at the full decode level.
Accordingly, it would be desirable to provide a technique which enables a video decoder to increase the bitstream rate it can handle (consequently allowing a higher quality of video to be handled), without relying on expensive and power-hungry dedicated hardware in the decoder, and furthermore without imposing requirements on the output of the encoder.