Video or other media signals may be used by a variety of devices, including televisions, broadcast systems, mobile devices, and both laptop and desktop computers. Typically, devices may display video in response to receipt of video or other media signals, often after decoding the signal from an encoded form. Video signals provided between devices are often encoded using one or more of a variety of encoding and/or compression techniques, and video signals are typically encoded in a manner to be decoded in accordance with a particular standard, such as MPEG-2, MPEG-4, or H.264/MPEG-4 Part 10. By encoding video or other media signals prior to transmission and decoding the signals on receipt, the amount of data transferred between devices may be significantly reduced.
Video encoding typically proceeds by sequentially encoding macroblocks, or other coding units, of video data. Prediction coding may be used to generate predictive blocks and residual blocks, where a residual block represents the difference between a predictive block and the block being coded. Prediction coding may include spatial and/or temporal predictions to remove redundant data in video signals, thereby further improving data compression. Intracoding, for example, is directed to spatial prediction and reducing the amount of spatial redundancy between blocks in a frame or slice. Intercoding, on the other hand, is directed toward temporal prediction and reducing the amount of temporal redundancy between blocks in successive frames or slices. Intercoding may make use of motion prediction to track movement between corresponding blocks of successive frames or slices.
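The relationship between predicted, residual, and coded blocks described above can be sketched as follows. This is a minimal illustration only; the 4x4 block values and function names are hypothetical, and real encoders operate on standard-defined block sizes with transform and quantization stages between prediction and reconstruction.

```python
# Hypothetical 4x4 blocks of pixel values (illustrative only).
actual_block = [
    [52, 55, 61, 66],
    [70, 61, 64, 73],
    [63, 59, 55, 90],
    [67, 61, 68, 104],
]
predicted_block = [
    [50, 50, 60, 60],
    [70, 60, 60, 70],
    [60, 60, 55, 90],
    [65, 60, 70, 100],
]

def residual(actual, predicted):
    """Residual block: element-wise difference between the block
    being coded and its prediction."""
    return [
        [a - p for a, p in zip(row_a, row_p)]
        for row_a, row_p in zip(actual, predicted)
    ]

res = residual(actual_block, predicted_block)

# A decoder reconstructs the block by adding the (decoded) residual
# back onto the same prediction.
reconstructed = [
    [p + r for p, r in zip(row_p, row_r)]
    for row_p, row_r in zip(predicted_block, res)
]
assert reconstructed == actual_block
```

Because a good prediction leaves a residual of mostly small values, the residual compresses far better than the original block, which is the point of both intracoding and intercoding.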
Typically, in encoder implementations, including intracoding and intercoding based implementations, residual blocks (i.e., the difference between actual and predicted blocks) may be transformed and/or quantized, and entropy encoded to generate a coded bitstream. The coded bitstream may be transmitted between the encoding device and the decoding device. Quantization may determine the amount of loss that occurs during the encoding of a video signal. That is, the amount of data that is removed from a video signal during an encoding process may depend on a quantization parameter.
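The lossy role of quantization can be sketched with a simple uniform scalar quantizer. The coefficient values and step size below are hypothetical, and standard codecs use more elaborate, QP-driven quantization matrices, but the principle is the same: a larger step size discards more information and leaves fewer distinct levels to entropy encode.

```python
def quantize(coeffs, step):
    # Uniform scalar quantization: each transform coefficient is
    # mapped to the nearest multiple of the step size. A larger step
    # yields coarser levels (fewer bits) but larger reconstruction error.
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # Inverse quantization at the decoder: scale levels back up.
    return [l * step for l in levels]

# Hypothetical transform coefficients of a residual block.
coeffs = [103.5, -21.2, 7.8, -2.1, 0.9, -0.4]

levels = quantize(coeffs, step=8)
recon = dequantize(levels, step=8)

# Small coefficients collapse to zero, which is where most of the
# bit savings come from; the nonzero coefficients are reconstructed
# only approximately, which is the quantization loss.
loss = [c - r for c, r in zip(coeffs, recon)]
```

With `step=8`, the last three coefficients quantize to zero and the reconstruction differs from the input; repeating the exercise with a smaller step shows less loss but more nonzero levels to code.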
Standard encoding methodologies (e.g., H.264, MPEG-2, etc.) may specify an available range of the quantization parameter (QP), a parameter used to determine the degree of quantization. This range may limit the amount of quantization achievable with a video encoder. In general, a higher QP value may result in a greater degree of quantization, and therefore fewer bits in the coded bitstream. Providing a maximum QP in an encoding standard, however, may limit the amount of bit reduction available, even if further bit reduction may be acceptable from a quality perspective.