Scalable video coding supports decoders with different capabilities. An encoder generates multiple encoded bitstreams for an input video, in contrast to single-layer coding, which produces only one encoded bitstream for a video. In scalable video coding, one of the output encoded bitstreams, referred to as the base layer (BL), can be decoded by itself, and this bitstream provides the lowest scalability level of the video output. To achieve a higher level of video output, the decoder can process the base layer bitstream together with other encoded bitstreams, referred to as enhancement layers (EL). Enhancement layers may be added to the base layer to generate higher scalability levels. One example is spatial scalability, where the base layer represents the lowest-resolution video and the decoder can generate higher-resolution video by using the base layer bitstream together with additional enhancement layer bitstreams. Thus, using additional enhancement layer bitstreams produces a better-quality video output, such as by achieving temporal, signal-to-noise ratio (SNR), and spatial improvements.
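The spatial-scalability example above can be sketched as follows. This is an illustrative model only, not an actual codec API; the layer names and resolutions are hypothetical, chosen to show that the highest layer a decoder combines with the base layer determines the output resolution.

```python
# Hypothetical sketch of spatial scalability: the base layer (BL) is decodable
# alone, and each enhancement layer (EL) raises the output resolution.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    width: int
    height: int

BASE = Layer("BL", 960, 540)                       # lowest scalability level
ENHANCEMENTS = [Layer("EL1", 1920, 1080),          # hypothetical layers
                Layer("EL2", 3840, 2160)]

def output_resolution(num_enhancement_layers: int) -> tuple:
    """Resolution a decoder produces from BL plus the first N enhancement layers."""
    layers = [BASE] + ENHANCEMENTS[:num_enhancement_layers]
    top = layers[-1]   # the highest decoded layer sets the output resolution
    return (top.width, top.height)

print(output_resolution(0))  # BL only -> (960, 540)
print(output_resolution(2))  # BL + EL1 + EL2 -> (3840, 2160)
```

A decoder with limited capability would simply stop at the base layer, while a more capable decoder requests and combines more enhancement layers.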
In scalable video, an encoder may encode the base layer and enhancement layers. Further, parameter settings may be determined for the layers. For example, parameter settings are determined for the base layer and for any combination of the base layer and enhancement layers. That is, if a combination of a base layer and an enhancement layer is available, the parameter settings govern the combined base layer and enhancement layer. The parameter settings may then be included in a video layer of the encoded bitstreams for the encoded base layer and enhancement layers. The pre-encoded video data is then stored, such as in an archive that holds the encoded layers and parameter settings. When a transmitter wants to send the video to one or more decoders, the transmitter may retrieve the encoded bitstreams for the applicable layers from storage and send them to the decoders.
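The archive workflow above can be sketched as follows. The helper names (`parameter_settings_for`, `build_archive`) and the settings contents are hypothetical stand-ins for whatever the encoder actually computes for each layer combination; the point is that one set of parameter settings is stored per combination, and a transmitter retrieves the entry matching the layers it will send.

```python
# Hypothetical sketch: parameter settings are computed per layer combination
# and stored alongside the encoded bitstreams.

def parameter_settings_for(layers: tuple) -> dict:
    # Placeholder: real settings would describe the combined layers
    # (e.g., constraints that govern decoding that combination).
    return {"layers": layers, "operation_point": len(layers) - 1}

def build_archive(base: str, enhancements: list) -> dict:
    """Compute settings for the base layer and each cumulative combination."""
    archive = {}
    combo = (base,)
    archive[combo] = parameter_settings_for(combo)
    for el in enhancements:
        combo = combo + (el,)          # BL+EL1, then BL+EL1+EL2, ...
        archive[combo] = parameter_settings_for(combo)
    return archive

archive = build_archive("BL", ["EL1", "EL2"])
# A transmitter serving a decoder that can handle BL+EL1 retrieves:
selected = archive[("BL", "EL1")]
```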
When the user wants to add a layer to the already-encoded layers of a video, the parameter settings stored for those layers do not take into account the presence of the additional layer. To account for the additional layer, the parameter settings must be changed to reflect it. For example, both the video layer and the transport stream for all of the pre-encoded layers may need to be changed for each picture. This is because the parameter settings for the base layer and enhancement layers govern the combination of the enhancement layers with the base layer, and thus the parameter settings may depend on the newly added enhancement layer. This introduces major complexity for the re-distribution transmission equipment that sends the pre-encoded video stream.
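A rough, purely illustrative count conveys the scale of the problem: if the video layer and the transport stream of every pre-encoded layer may need updating for every picture, the number of changes grows with both the layer count and the video length. The numbers below are hypothetical.

```python
# Illustrative (hypothetical) count of structures that may need updating when
# one enhancement layer is added to an already-encoded scalable stream:
# each pre-encoded layer's video-layer and transport-stream parameter
# settings may need a change for every picture.

def updates_required(num_pre_encoded_layers: int, num_pictures: int) -> int:
    per_picture = num_pre_encoded_layers * 2   # video layer + transport stream
    return per_picture * num_pictures

# e.g., 3 existing layers, 30 pictures per second, one hour of video:
print(updates_required(3, 30 * 60 * 60))  # -> 648000
```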
Furthermore, while the above relates to video encoding, similar problems exist for creating and managing MPEG-2 transport streams, which may carry multiple streams, including scalable video streams. MPEG-2 is the designation for a group of standards promulgated by the Moving Picture Experts Group (“MPEG”) as the ISO/IEC 13818 international standard. A typical use of MPEG-2 is to encode audio and video for broadcast signals, including signals transmitted by satellite and cable. Thus, because a scalable video stream comprises multiple layers, MPEG-2 transport streams may be prone to the same issues when layers are added or deleted.
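To see why adding or deleting a layer touches the transport stream, recall that in an MPEG-2 transport stream each elementary stream, including each scalable video layer, is carried on its own packet identifier (PID) and listed in the Program Map Table (PMT). The sketch below models this minimally; the PID values and type strings are hypothetical.

```python
# Minimal (hypothetical) model of an MPEG-2 program map: each elementary
# stream, including each scalable video layer, has its own PID in the PMT.
pmt = {
    "program_number": 1,
    "version_number": 0,
    "streams": [
        {"pid": 0x101, "type": "video/base-layer"},
        {"pid": 0x102, "type": "video/enhancement-layer-1"},
        {"pid": 0x103, "type": "audio"},
    ],
}

def add_enhancement_layer(pmt: dict, pid: int, stream_type: str) -> None:
    """Adding a layer means updating the PMT that describes the program."""
    pmt["streams"].append({"pid": pid, "type": stream_type})
    # The PMT's version number is bumped so receivers re-parse the table.
    pmt["version_number"] += 1

add_enhancement_layer(pmt, 0x104, "video/enhancement-layer-2")
```

Deleting a layer is the mirror operation: the entry is removed and the table is re-signaled, which is why layer changes ripple into transport-stream management as well.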