Digital video streams typically represent video using a sequence of frames (i.e. still images). An increasing number of applications today make use of digital video stream encoding for purposes other than traditional moving pictures (such as movies and video clips). For example, screen capture and screen casting applications generally represent the output of a computer monitor over time as a digital video stream, irrespective of the specialized nature of the content of the monitor. Typically, screen capture and screen casting digital video streams are encoded using video encoding techniques like those used for traditional moving pictures.
To permit transmission of digital video streams while limiting bandwidth consumption, a number of video compression schemes have been devised, including formats such as VPx, promulgated by Google, Inc. of Mountain View, Calif., and H.264, a standard promulgated by ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), including present and future versions thereof. H.264 is also known as MPEG-4 Part 10 or MPEG-4 AVC (formally, ISO/IEC 14496-10).
Various users of digital video streams may require or prefer compressed video at different bitrates (i.e. number of bits per second). One way to adjust the bitrate of a digital video stream is to change the number of frames encoded per second (FPS). In other words, a video stream encoded at 6 FPS may consume less bitrate (also referred to as bandwidth) than a video stream encoded at 12 FPS. In order to encode a video stream once, but still support varying bitrate requirements, existing video compression schemes allow for encoding of a video stream into multiple layers.
These schemes are limited to a fixed number of layers, for example, three layers. The first layer, or base layer, contains a standalone compressed video stream that encodes the video stream at a low FPS, for example, 3 FPS. Successive layers can be combined with the base layer to create a video stream at a higher FPS. The second layer, or intermediate layer, contains a compressed video stream at an FPS equal to the combined FPS of all of the layers below it, in this case, 3 FPS. The intermediate layer, when combined with the base layer, effectively doubles the FPS of the base layer, for a combined FPS of 6. The frames in the intermediate layer are arranged with respect to time to fall in between the frames in the base layer. The third layer, or high layer, contains a compressed video stream at an FPS equal to the combined FPS of all of the layers below it, in this case, 6 FPS. The high layer, when combined with the layers below, effectively doubles their FPS, for a combined FPS of 12. The frames in the high layer are arranged with respect to time to fall in between the frames included in each of the base and intermediate layers.
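The interleaving of frames among the three layers described above can be sketched as follows. This is an illustrative sketch only, assuming the 3/6/12 FPS example values from the text; the function name and modulo-based assignment are hypothetical, not part of any particular codec:

```python
def layer_for_frame(index: int) -> str:
    """Assign a frame of a 12 FPS stream to one of three temporal layers.

    Base layer alone: 3 FPS (every fourth frame).
    Base + intermediate: 6 FPS (intermediate frames fall midway between
    base frames).
    Base + intermediate + high: 12 FPS (high frames fall midway between
    the frames of the two lower layers).
    """
    if index % 4 == 0:
        return "base"
    if index % 4 == 2:
        return "intermediate"
    return "high"

# One second of a 12 FPS stream:
layers = [layer_for_frame(i) for i in range(12)]
print(layers.count("base"),
      layers.count("intermediate"),
      layers.count("high"))  # prints: 3 3 6
```

Note that each layer contributes exactly as many frames per second as all lower layers combined, which is what produces the doubling behavior.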
In other words, these schemes are limited in the sense that they are unable to increment the FPS of a compressed video stream in a granular fashion. Rather, each layer of encoded video doubles the FPS of the layers below, resulting in an exponential increase in FPS as layers are included in the data sent to a user.
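The coarseness of this layering can be expressed arithmetically: with a fixed doubling scheme, the combined FPS grows as a power of two in the number of layers. A minimal sketch, assuming the 3 FPS base layer from the example above (the function name is illustrative):

```python
def combined_fps(base_fps: int, num_layers: int) -> int:
    """Combined frame rate when each successive layer doubles the
    FPS of all of the layers below it."""
    return base_fps * 2 ** (num_layers - 1)

# With a 3 FPS base layer, the only available rates are 3, 6, and 12 FPS;
# an intermediate rate such as 9 FPS cannot be delivered by adding or
# dropping a layer.
print([combined_fps(3, n) for n in (1, 2, 3)])  # prints: [3, 6, 12]
```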