This disclosure relates generally to video encoding and, more particularly, to dynamically altering mode decisions used in video encoding.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
An electronic device may present visual representations of information as image frames displayed on an electronic display based on image data. Because image data may be received from another electronic device, such as a camera, or retrieved from storage on the electronic device, the image data may be encoded (e.g., compressed) to reduce its size (e.g., number of bits) and, thus, the resources (e.g., transmission bandwidth and/or memory) used to transmit and/or store the image data. To display image frames, the electronic device may decode the encoded image data and instruct the electronic display to adjust the luminance of its display pixels based on the decoded image data.
To facilitate encoding, prediction modes may be used to indicate the image data by referencing other image data. For example, since successively displayed image frames may be generally similar, inter-frame prediction techniques may be used to indicate image data (e.g., a prediction unit) corresponding with a first image frame by referencing image data (e.g., a reference sample) corresponding with a second image frame, which may be displayed directly before or directly after the first image frame. To facilitate identifying the reference sample, a motion vector may indicate the position of the reference sample in the second image frame relative to the location of the prediction unit in the first image frame. In other words, instead of directly compressing the image data, the image data may be encoded based at least in part on a motion vector used to indicate the desired value of the image data. Further, in other instances, intra-frame prediction techniques may be used to indicate image data (e.g., a prediction unit) corresponding with a first image frame by referencing image data (e.g., a reference sample) within the same image frame.
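The inter-frame prediction described above can be sketched as follows. This is an illustrative simplification, not the disclosed techniques themselves: the frame arrays, the 8×8 prediction-unit size, and the integer-pel motion vector are assumptions for the example. The key point shown is that a prediction unit is represented by a motion vector plus a residual rather than by its raw pixel values.

```python
import numpy as np

def reference_block(reference_frame, pu_row, pu_col, motion_vector, size=8):
    """Fetch the reference sample that a motion vector points to.

    The motion vector gives the position of the reference sample in the
    second (reference) frame relative to the location of the prediction
    unit in the first (current) frame. Integer-pel offsets only, for
    simplicity of illustration.
    """
    dr, dc = motion_vector
    r, c = pu_row + dr, pu_col + dc
    return reference_frame[r:r + size, c:c + size]

def encode_prediction_unit(current_frame, reference_frame,
                           pu_row, pu_col, motion_vector, size=8):
    """Represent a prediction unit as (motion vector, residual) instead
    of directly compressing its pixel values."""
    pu = current_frame[pu_row:pu_row + size, pu_col:pu_col + size]
    ref = reference_block(reference_frame, pu_row, pu_col,
                          motion_vector, size)
    # Residual in a wider signed type to avoid uint8 wrap-around.
    residual = pu.astype(np.int16) - ref.astype(np.int16)
    return motion_vector, residual
```

When the motion vector points at a well-matched reference sample, the residual is small (ideally all zeros) and compresses far better than the raw prediction unit would.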
In some instances, image data may be captured for real-time or near real-time display and/or transmission. For example, when an image sensor (e.g., a digital camera) captures image data, an electronic display may shortly thereafter display image frames based on the captured image data. Additionally or alternatively, an electronic device may shortly thereafter transmit the image frames to another electronic device and/or a network. As such, the ability to display and/or transmit in real-time or near real-time may be based at least in part on the efficiency with which the image data is encoded, for example, using inter-frame or intra-frame prediction techniques. However, determining which prediction technique to use to encode image data may be computationally complex, for example, due to the number of clock cycles used to process distortion measurement calculations.
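A minimal sketch of such a distortion-based mode decision follows, assuming the sum of absolute differences (SAD) as the distortion measure and a single candidate prediction per mode; the disclosure does not specify a particular measure, so these are illustrative assumptions. The per-pixel work in the distortion calculation, repeated over every candidate mode and block, is the kind of computation that makes the decision costly.

```python
import numpy as np

def sad(block, prediction):
    """Sum of absolute differences: one common distortion measure.

    Each evaluation touches every pixel of the block, which is why
    comparing many candidate modes consumes many clock cycles.
    """
    diff = block.astype(np.int16) - prediction.astype(np.int16)
    return int(np.abs(diff).sum())

def choose_mode(block, inter_prediction, intra_prediction):
    """Pick inter- or intra-frame prediction for a block by comparing
    the distortion of each candidate prediction (lower is better)."""
    inter_cost = sad(block, inter_prediction)
    intra_cost = sad(block, intra_prediction)
    if inter_cost <= intra_cost:
        return "inter", inter_cost
    return "intra", intra_cost
```

A production encoder would evaluate many more candidates (motion vectors, intra directions, block partitions) and typically weigh distortion against bit cost, multiplying the computational load well beyond this two-way comparison.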