Video decoding nowadays attempts to recreate the original source as faithfully as possible. This is done under the assumption that the viewing conditions are in line with the scenarios envisioned when recording and editing the footage. Common assumptions include an average-lit room with an average-abled viewer at a 'normal' distance from an average-sized TV set. While this suits the average usage scenario, it is not optimal (in terms of compression, user experience, or complexity) for many use cases.
Existing solutions thus follow a one-size-fits-all approach: the content creator assumes a particular viewing condition and creates the content to match this assumption.
Modern televisions do offer a limited adaptation possibility by allowing the user to manually choose a certain viewing profile. This selects an image processing profile intended to enhance the viewing experience in some way (e.g. a 'movie' profile which attempts to enhance the dynamic range of the content). While this adaptation can be useful, it is restricted to (slightly) enhancing the user experience and needs to be configured manually. Furthermore, it adapts to only a very small part of the viewing context, namely the viewing device itself. Additionally, these post-processing steps do not take into account any previous coding steps and configuration, and thus provide sub-optimal performance.
In the domain of scalable video, other adaptation mechanisms are common. Different layers are available in the codec in order to increase or decrease the quality of the video, at the expense of bandwidth. The selection of these layers is typically configured by the network in order to optimize for bandwidth (and, implicitly, for user experience, by limiting the number of video freezes). Other systems allow the end-user to manually select the appropriate layers. In either case, the network configures the system based on network usage, or the user must configure the codec manually.
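The layer-selection mechanism described above can be sketched as follows. This is an illustrative sketch only: the greedy selection policy, the function names, and the bitrate figures are assumptions for the purpose of explanation and do not correspond to any particular scalable codec or standard.

```python
def select_layers(available_bandwidth_kbps, layer_bitrates_kbps):
    """Greedily enable layers, base layer first, while bandwidth allows.

    layer_bitrates_kbps is ordered: base layer first, then
    successive enhancement layers (each builds on the ones below).
    """
    selected = []
    used = 0
    for bitrate in layer_bitrates_kbps:
        if used + bitrate <= available_bandwidth_kbps:
            selected.append(bitrate)
            used += bitrate
        else:
            # Enhancement layers depend on lower layers, so stop
            # at the first layer that does not fit the budget.
            break
    return selected

# Example (illustrative bitrates): a 500 kbps base layer plus
# two enhancement layers of 1000 and 2000 kbps.
layers = [500, 1000, 2000]
print(select_layers(2000, layers))  # → [500, 1000]
```

A network element could call such a routine with the measured available bandwidth; manual selection corresponds to the user fixing the number of layers directly instead.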
It is an object of the present invention to provide video to a user in a manner that is better adapted to the actual viewing context.