Video signals may be used by a variety of devices, including televisions, broadcast systems, mobile devices, and both laptop and desktop computers. Typically, devices may display video in response to receipt of video signals, often after decoding the signal from an encoded bitstream. Video signals provided between devices are often encoded using one or more of a variety of encoding and/or compression techniques, and video signals are typically encoded in a manner to be decoded in accordance with a particular standard, such as H.264 or HEVC. By encoding video signals prior to transmission and decoding the received signals, the amount of data transmitted between devices may be significantly reduced.
Video encoding is typically employed by encoding macroblocks, or other coding units, of video data. Predictive encoding may be used to generate predictive blocks and residual blocks, where a residual block represents a difference between a predictive block and the block being coded. Predictive encoding may include spatial and/or temporal prediction to remove redundant data in video signals, thereby further reducing the amount of data. Intracoding, for example, is directed to spatial prediction and reducing the amount of spatial redundancy between blocks in a frame or slice. Intercoding, on the other hand, is directed toward temporal prediction and reducing the amount of temporal redundancy between blocks in successive frames or slices. Intercoding may make use of motion prediction to track movement between corresponding blocks of successive frames or slices.
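As a rough illustration of the residual computation described above, the sketch below forms a motion-compensated prediction from a reference frame and subtracts it from the block being coded. The function name, the `(dy, dx)` motion-vector convention, and the toy frame data are all hypothetical, chosen only to show the principle; actual codecs use sub-pixel interpolation and standard-defined prediction modes.

```python
import numpy as np

def residual_block(current, reference, mv):
    """Residual between a current block and its motion-compensated
    prediction taken from a previously decoded reference frame.

    current:   2-D array, the block being coded
    reference: 2-D array, the reference frame
    mv:        (dy, dx) integer motion vector into the reference
    """
    h, w = current.shape
    dy, dx = mv
    prediction = reference[dy:dy + h, dx:dx + w]
    # Signed arithmetic so negative differences are preserved
    return current.astype(np.int16) - prediction.astype(np.int16)

# Toy example: a 4x4 block copied unchanged from the reference frame,
# so the motion vector points at a perfect match and the residual is zero.
reference = np.arange(64, dtype=np.uint8).reshape(8, 8)
current = reference[2:6, 3:7].copy()
residual = residual_block(current, reference, mv=(2, 3))
print(residual.sum())  # 0: perfect prediction leaves an all-zero residual
```

Because only the (typically small) residual and the motion vector need to be encoded, a well-predicted block contributes very little to the bitstream.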
Typically, syntax elements, such as coefficients and motion vectors, may be encoded using one of a variety of encoding techniques (e.g., entropy encoding), and several approaches may further attempt to optimize syntax elements. Many video encoding methodologies make use of some form of trade-off between an achievable data rate and the magnitude of distortion in a decoded signal. Encoding in this manner is often computationally demanding and poses significant challenges when applied in real-time applications. While parallel processing may, in some instances, address relatively high computational demand, temporal and/or spatial dependencies existing between respective portions of a video signal may preclude use of conventional parallel processing approaches.
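The rate-distortion trade-off mentioned above is commonly expressed as a Lagrangian cost J = D + λ·R, where D is the distortion of a candidate coding decision, R its rate in bits, and λ a multiplier weighting bits against distortion. The sketch below, with hypothetical mode names, distortion values, and λ, shows how an encoder might pick the candidate with the lowest combined cost.

```python
def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

# Hypothetical candidate coding modes: (name, distortion as SSD, rate in bits)
candidates = [
    ("intra_dc",   520.0, 96),
    ("inter_skip", 610.0, 12),
    ("inter_mv",   430.0, 64),
]

lam = 4.0  # Lagrange multiplier trading bits against distortion
best = min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))
print(best[0])  # the mode minimizing J = D + lambda * R
```

Evaluating this cost for every candidate mode of every block is a large part of why such encoding is computationally demanding: the search space grows with the number of prediction modes, partitionings, and motion-vector candidates.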