Video compression makes it possible to use video data in many different ways; without it, distributing video by tape, disk, network, or any other means would be far more difficult. Ever-increasing video resolutions and rising expectations for high-quality video images create a strong demand for efficient compression of video image data. Video compression is computationally intensive, however, and requires trading speed against quality. This is true for any video coding standard, such as Advanced Video Coding (AVC), H.265/HEVC (High Efficiency Video Coding), VP9, and other video coding standards. These standards use expanded forms of traditional approaches to address the problem of insufficient compression and/or quality, but the results are often still insufficient and require a relatively large amount of computation and time. Thus, high-resolution encoding is often too slow, and in many cases slower than real time.
One way to reduce delays caused by encoding is to use graphics hardware accelerators that have special-purpose hardware for processing large amounts of video coding data; such hardware can frequently process multiple streams faster than real time. Accelerating multiple video encodes, however, raises other problems. The accelerator must still process frame data quickly enough for an encoder to deliver frames at a target frame rate across multiple encodes. To guarantee those frame rates for all possible inputs, regardless of image complexity or other variations in image content or delivery, conservative encoder quality/performance tradeoff settings must be used to account for worst-case scenarios. This sacrifices opportunities to increase the overall quality of the video encode. No mechanism exists to automatically tune the encoder settings based on the rate or latency of the image content being processed, so as to maximize quality while maintaining a specified output frame rate across multiple encodes.
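The kind of latency-driven tuning whose absence is noted above can be illustrated as a simple feedback loop: measure how long each frame took to encode, compare that against the per-frame time budget implied by the target frame rate, and step the quality/performance setting up or down accordingly. The following is a minimal sketch under assumed names; the preset list, thresholds, and `tune_preset` function are illustrative and do not correspond to any specific encoder API.

```python
# Illustrative sketch of latency-based encoder tuning (all names and
# thresholds are hypothetical, not taken from any real codec API).
TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS          # seconds available per frame
PRESETS = ["fastest", "fast", "balanced", "quality", "best"]

def tune_preset(preset_idx: int, encode_latency: float) -> int:
    """Return the next preset index given the last frame's encode latency."""
    if encode_latency > 0.9 * FRAME_BUDGET and preset_idx > 0:
        return preset_idx - 1            # near the budget: trade quality for speed
    if encode_latency < 0.5 * FRAME_BUDGET and preset_idx < len(PRESETS) - 1:
        return preset_idx + 1            # ample headroom: raise quality
    return preset_idx                    # within band: keep current setting

# Example: start at the conservative worst-case setting, then let
# observed (fast) encode latencies step the quality upward.
idx = 0
for latency in [0.004, 0.005, 0.006, 0.016]:   # measured seconds per frame
    idx = tune_preset(idx, latency)
print(PRESETS[idx])
```

The hysteresis band (between 0.5× and 0.9× of the frame budget) keeps the setting from oscillating on small latency fluctuations, while still backing off before the budget is actually exceeded; this is the opposite of fixing a single worst-case setting up front.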