The consumption of video content is dramatically increasing due, at least in part, to availability of VOD (Video on Demand) services, the increasing number of live services, and the proliferation of devices on which video may be accessed. Video content can be accessed by many different types of devices, such as smart phones, tablets, personal computers, televisions, and game consoles, which are connected to various types of networks including broadcast, satellite, cellular, ADSL, and fibre.
Due to the large size of raw video when stored in digital form, digital video content is generally accessed in a compressed format. Consequently, video content is generally expressed in, and associated with, a particular video compression standard.
FIG. 1 is an illustration of a prior art channel encoder. The structure of FIG. 1 is based on that described in European Patent Application Publication Number EP 0915623, filed on Oct. 23, 1998, entitled “Device for Processing Coded Video Data and Systems for Distributing Programmes Using Such a Device,” which is hereby incorporated by reference for all purposes as if fully set forth herein. As shown in FIG. 1, the channel encoder 100 receives a stream comprising a video stream, one or more accompanying audio streams, and one or more accompanying data streams. The received stream may include a number of audio streams, which may correspond to the sound track of the video content in different languages or in different audio encoding standards. Similarly, subtitles, Teletext, interactive applications, tickers, and the like may be provided in respective data streams.
The video stream, audio streams, and one or more accompanying data streams contained within the received stream are separated by a demultiplexer 101. As shown, the encoded video stream is sent to a video analysis unit 110, each audio stream is sent to a respective audio encoder belonging to an audio processing group 130, and each data stream is sent to a respective data encoder belonging to a data processing group (not shown in FIG. 1). The stream of data which enters the device 100 may emanate from a compressed source such as, for example, a compressed archived data source (not shown), or from an uncompressed video source with audio and data embedded in the vertical or horizontal ancillary transport.
The input stream may be a compressed stream consisting of data representing a video programme in a Single Programme Transport Stream (SPTS) or a Multiple Programme Transport Stream (MPTS), or received from a broadcast transponder; alternatively, it may be Serial Digital Interface (SDI) baseband video with audio and data transported in VANC/HANC, emanating from a video production studio source. A compressed input stream therefore consists of a succession of packets of video data, of packets of audio data, and of packets of data relating to at least one programme consisting of video and audio data.
The input stream may alternatively be an uncompressed stream comprising video data, together with ancillary data in the horizontal and vertical blanking areas of an HD-SDI signal. In an uncompressed input stream, the ancillary data may transport audio and auxiliary data. Such a stream therefore consists of a succession of video data lines, horizontal blanking lines mainly transporting auxiliary data, and vertical blanking lines mainly transporting audio samples.
The compressed or uncompressed input stream is applied to the demultiplexer 101 whose function is to separate the packets or lines of video data from the remainder of the packets or lines of data contained in the stream. Thus, the packets of video data are extracted from a first output of the demultiplexer 101 while the packets of non-video data are extracted from a second output of the demultiplexer 101.
The packets of video data are recognized by means of their Packet Identifier (PID) and forwarded to the video preprocessing component 160. The uncompressed video stream is then forwarded to a video analysis unit 110, which is adapted to retrieve, in the clear, values of coding parameters from the coded video data; to a coding parameter editing unit 150, which is adapted to support modification of at least one coding parameter value on the basis of a bit rate instruction CS; and to an encoder 170 for coding the video data.
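By way of illustration, the PID-based separation performed by the demultiplexer 101 can be sketched as follows. The packet layout follows the MPEG-2 transport stream format (188-byte packets, sync byte 0x47, 13-bit PID spanning the second and third header bytes); the function names and the set-based dispatch are illustrative assumptions, not part of the described device:

```python
def packet_pid(packet: bytes) -> int:
    """Extract the 13-bit Packet Identifier from an MPEG-2 TS packet header."""
    assert len(packet) == 188 and packet[0] == 0x47  # 0x47 is the TS sync byte
    return ((packet[1] & 0x1F) << 8) | packet[2]

def demultiplex(packets, video_pids):
    """Split a transport stream into video and non-video packets by PID."""
    video, other = [], []
    for pkt in packets:
        (video if packet_pid(pkt) in video_pids else other).append(pkt)
    return video, other
```

In a real demultiplexer the set of video PIDs would be learned from the Programme Map Table rather than supplied by hand as it is here.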
The coded video data which enters the channel encoder 100 may constitute a constant bit-rate stream of data VD1 emanating, for example, from an archived source or, more generally, from a source produced by a compression step, which is not represented in FIG. 1. The video data is forwarded to video preprocessing component 160 and to the video analysis unit 110. Video preprocessing component 160 decodes the video data and forwards the decoded video data to the encoding unit 170. The video analysis unit 110 generates, from the coded video data and the uncompressed (decoded) video data, a signal Q consisting of data representing the values of the various coding parameters originally used to code the video data. The signal Q is applied to the coding parameter editing unit 150. The video analysis unit 110 may, as shown, forward the signal Q to a decision facility (not shown). This decision facility additionally receives signals of the same kind as the signal Q originating from other devices.
FIG. 2 is an illustration of a video distribution system incorporating one or more video encoder units as described with reference to FIG. 1. As shown in FIG. 2, the system comprises a plurality of channel encoder units 211, 212, 213, 214, each of which may correspond to the channel encoder unit 100 as described with reference to FIG. 1. As such, each channel encoder unit outputs a respective Q signal, and receives a respective CS instruction. Each Q signal is conveyed to an allocator 220, which in turn determines and outputs the CS instruction to each encoder unit. As such, the allocator 220 may manage, in an optimized manner, the bandwidth attributed to each encoder and to the corresponding video stream that encoder handles. The outputs of each channel encoder are multiplexed together by multiplexer 230 for transmission.
Under the action of the bit rate instruction CS, the coding parameter editing unit 150 modifies the value of a coding parameter, such as the value of the coding parameter relating to the quantizing of the coefficients emanating from the DCT. The coding parameter editing unit 150 generates a new set of values of coding parameters, which the encoder 170 then applies to encode the video data so as to constitute an output video stream whose bit rate is different from the bit rate of the input video stream. Between the demultiplexer 101 and the multiplexer 180, the non-video data are delayed by the circuit 130. The function of the delay circuit 130 is to delay the non-video data by the duration of the processing of the video data. A stream of video and non-video data is reconstructed at the output of the multiplexer 180.
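As a sketch only, the effect of the bit rate instruction CS on such a quantising parameter might be modelled as a simple proportional scaling, in which a coarser quantiser (a larger value) lowers the output bit rate. The function, the proportional-control model, and the clamping range (1 to 31, as used for the MPEG-2 quantiser scale) are illustrative assumptions, not the editing unit's actual algorithm:

```python
def edit_quantiser(q_in: int, current_rate: float, target_rate: float,
                   q_min: int = 1, q_max: int = 31) -> int:
    """Scale a DCT quantisation parameter so that the output bit rate
    approaches the target conveyed by the bit rate instruction CS.
    Halving the target rate roughly doubles the quantiser step."""
    scaled = q_in * (current_rate / target_rate)
    return max(q_min, min(q_max, round(scaled)))
```

A production rate controller would additionally smooth the adjustment over time and account for buffer occupancy, which this sketch omits.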
In video statistical multiplexing, a set of video components share a video pool bit rate. A bit rate is allocated to each video service according to the complexity of the incoming pictures: a low bit rate is assigned to a low-complexity video service, while a high bit rate is assigned to a high-complexity video service. By this means, all video services produce streams of approximately constant video quality.
A static bit rate is reserved for audio and data. This reserved bit rate is dimensioned to support the maximum bit rate of each component; consequently, the reserved bit rate is not always fully used.
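The allocation scheme of the two preceding paragraphs can be sketched as follows: a static reserve is first subtracted for audio and data, and the remaining video pool is then shared among the video services in proportion to a per-service complexity measure. The function name and the form of the complexity inputs are illustrative assumptions:

```python
def allocate_video_pool(complexities, total_rate, reserved_rate):
    """Share the video pool bit rate (the total rate minus the static
    audio/data reserve) among video services in proportion to picture
    complexity, so a more complex service receives a higher bit rate."""
    pool = total_rate - reserved_rate
    total_complexity = sum(complexities)
    return [pool * c / total_complexity for c in complexities]
```

For example, with a 10 Mbit/s total, a 2 Mbit/s static reserve, and two services whose complexities stand in a 1:3 ratio, the 8 Mbit/s pool is split 2 Mbit/s and 6 Mbit/s.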
While the above-described mechanism allows for dynamic handling of video encoding, it takes no account of variations in transmission demands for non-video data. It is desirable to improve bandwidth management in a manner that takes these factors into account.