Transform coding and decoding of video data usually includes what is called entropy coding. For compression, the pixel information of a picture, e.g., a residual picture after motion-compensated prediction, or a picture for intra-coding, is divided into blocks. The blocks are transformed, e.g., by a discrete cosine transform (DCT) or a similar transform, and the resulting transform coefficients are quantized. The quantized transform coefficients are ordered, e.g., from low to higher frequencies along a path in the two-dimensional transform domain. The ordered series of quantized transform coefficients is then losslessly encoded by an entropy coding method. One popular entropy coding method is variable length coding (VLC), in which one or more events, representing one or more quantized coefficients or properties thereof, are encoded by codewords such that events that are more likely to occur are encoded, on average, by shorter codewords than events that are less likely to occur. VLC, owing to its good tradeoff between efficiency and simplicity, has been widely used in entropy coding, particularly when the codec is required to have low computational complexity.
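The scan-and-encode steps above can be sketched as follows. This is a minimal illustration, not any standard's actual method: it assumes a zig-zag scan as the ordering path and an unsigned exponential-Golomb code as the illustrative VLC (real codecs define their own codeword tables), and it assumes the transform and quantization have already been applied, leaving only non-negative magnitudes.

```python
def zigzag_order(n):
    """(row, col) visiting order for an n x n block, from low to high frequency."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    # Sort by anti-diagonal r + c; alternate scan direction on each diagonal.
    coords.sort(key=lambda rc: (rc[0] + rc[1],
                                -rc[1] if (rc[0] + rc[1]) % 2 else rc[1]))
    return coords

def exp_golomb(value):
    """Unsigned Exp-Golomb codeword: shorter codes for smaller (more likely) values."""
    v = value + 1
    prefix_len = v.bit_length() - 1          # number of leading zero bits
    return "0" * prefix_len + format(v, "b")

def encode_block(quantized, n):
    """Scan an n x n block of quantized magnitudes and emit one bitstring."""
    return "".join(exp_golomb(quantized[r][c]) for r, c in zigzag_order(n))
```

For example, `encode_block([[2, 1], [0, 0]], 2)` scans the magnitudes in the order 2, 1, 0, 0 and concatenates the codewords "011", "010", "1", "1", giving "01101011"; the frequent zero magnitudes receive the shortest codeword.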
Recent video coders/decoders (codecs), such as those conforming to H.264/AVC and China's AVS standard (Audio Video Coding Standard of China), take into account the context of transform blocks to reduce inter-coefficient redundancy. H.264/AVC describes a method called context-based adaptive variable length coding (CAVLC), while China's AVS describes context-based 2D variable length coding (C2DVLC). Each of these uses multiple VLC tables and performs context-based adaptive table switching while coding the coefficients of a block.
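Adaptive table switching can be sketched in the spirit of the above, though the tables, thresholds, and switching rule here are illustrative assumptions and not those of CAVLC or C2DVLC. The context is the largest coefficient magnitude coded so far: once it exceeds a threshold, the encoder switches to a table (here, a larger Golomb-Rice parameter k) better suited to larger levels.

```python
# Assumed table set and switch points for illustration only: each "table" is
# a Golomb-Rice parameter k, and we advance to the next table once the
# largest magnitude seen so far exceeds the corresponding threshold.
RICE_PARAM_BY_TABLE = [0, 1, 2]
SWITCH_THRESHOLDS = [2, 5]

def rice_code(value, k):
    """Golomb-Rice codeword: unary quotient, "0" separator, k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_levels(magnitudes):
    """Encode coefficient magnitudes, switching tables as the context grows."""
    bits, table, seen_max = [], 0, 0
    for m in magnitudes:
        bits.append(rice_code(m, RICE_PARAM_BY_TABLE[table]))
        seen_max = max(seen_max, m)
        # Context update: a large level already coded suggests the remaining
        # levels are also large, so move to a table with a longer remainder.
        while table < len(SWITCH_THRESHOLDS) and seen_max > SWITCH_THRESHOLDS[table]:
            table += 1
    return "".join(bits)
```

With this sketch, `encode_levels([1, 3, 6, 4])` codes the first two levels with k = 0, switches to k = 1 after seeing magnitude 3, and to k = 2 after seeing magnitude 6, mirroring how CAVLC and C2DVLC adapt their table choice to the levels coded so far.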