This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
In U.S. Pat. No. 3,050,870, Heilig introduced Sensorama, a mechanical device with which users could watch a movie, sense vibrations, feel wind and smell odors. This pioneering work, dating from the early sixties, opened the path towards the addition of haptics to improve the immersion of a user in virtual reality environments. Motion simulation is one such haptic effect; it is used, for example, in professional flight simulators but also in amusement parks to enhance the video viewing experience with haptic effects of motion. Most of these systems use a variation of the Stewart platform described by Dasgupta in “The Stewart platform manipulator: a review”. Basically, a motion simulator can be understood as a seat attached to a platform able to move in any direction, hereafter called a haptic device. The user's whole body is thus moved to generate various sensations such as accelerating, falling or passing over bumps. Such motion simulators act on the vestibular system of the user, which allows a human to sense acceleration and rotation of the body and therefore to feel its movement.
When applied to audiovisual content, haptic feedback may be used to improve the immersion of the viewer into the content. This combination is known under the acronym HAV, standing for Haptics Audio Visual. In HAV systems, the relation between users and content is no longer limited to a passive context in which the user merely listens to the sounds and watches the images; it is enhanced through physical actions on the user's body that induce physical sensations related to the audiovisual content. As a result, the emotional impact of the experience is magnified. For that purpose, the signal representing the audiovisual content needs to be enhanced with information about the haptic effects to be performed. This is done by first generating the appropriate haptic effect information and then combining it with the audiovisual data, thus creating a HAV content to be rendered by a HAV rendering device. When creating the haptic data, the creator has to specify the type and parameters of the effects. There are some situations where the creator knows the capabilities of the HAV rendering device, as is the case in closed systems such as those used in amusement parks or so-called 4D movie theaters. In this context, the creator adapts the effects to these capabilities. In the case of home users' haptic rendering devices, the situation is different, since there is a huge diversity of rendering capabilities. A typical example is the range of movement of a motion platform or haptic device: cheap motion platforms may have a more limited range than more expensive devices. In order to render the desired haptic effect correctly, the creator needs to adapt the haptic rendering information to each rendering device and therefore may have to provide multiple versions of the HAV content.
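The adaptation of an authored motion effect to a device with a different range of movement can be sketched as follows. This is a minimal illustration in Python; the function and parameter names are hypothetical and not part of any standard or of the present disclosure.

```python
def adapt_displacement(authored_m, device_range_m, reference_range_m):
    """Rescale a motion-effect displacement (in meters) authored for a
    reference platform so it fits a target device with a different range
    of movement. The relative intensity of the effect is preserved by
    scaling, and the result is additionally clamped to the device's
    physical limit as a safety measure."""
    scaled = authored_m * (device_range_m / reference_range_m)
    return max(-device_range_m, min(device_range_m, scaled))
```

For example, an effect authored as a 0.3 m forward movement on a reference platform with a 0.3 m range would be rendered as a 0.1 m movement on a cheaper platform whose range is only 0.1 m, keeping the same relative intensity.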
Creating, distributing and rendering haptic feedback requires a representation format. Besides proprietary formats, the MPEG-V architecture is one standard formalization of the workflow for HAV content. The MPEG-V standard (ISO/IEC 23005:2011) is specifically designed to provide multi-sensorial content associated with audiovisual data, in order to control multimedia presentations and applications through advanced interaction devices such as HAV rendering devices. MPEG-V notably describes “sensory effects”, which include haptic effects.
In the domain of haptic rendering, a major issue relates to the physical limitations of the rendering devices. For instance, a rendering device meant to provide a heat sensation to the user, for example in association with an explosion in an action movie, has some inertia in delivering the heat and then cooling down again. As a consequence, in some cases it is not possible to render some successions of haptic effects. In the case of a haptic device, when two “forward acceleration” effects need to be rendered in succession, the device may not have enough range of movement to combine the two required movements. To overcome this issue, the haptic device first moves forward to deliver the acceleration feeling, then has to go back to its original position before moving forward again to deliver the second acceleration. This intermediary step of moving back to the original position between two effects is called a “washout” and should not be perceived by the user. To remain unnoticed, the acceleration of the washout movement must stay below the perception threshold of the vestibular system, which is around 0.1 m/s². A major issue with washouts is that, due to the diversity of HAV rendering devices and their differing physical limitations, it is very difficult for creators of HAV content to make use of them, since washouts are strongly tied to the physical limitations and capabilities of the rendering devices. In the general case, these limitations are unknown at the creation stage.
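The washout constraint described above can be made concrete with a short sketch. Assuming a single translation axis and a triangular velocity profile (accelerate back toward neutral for half the time, decelerate for the other half), the code below computes the minimal washout duration and the platform position over time while keeping the acceleration at or below the approximate 0.1 m/s² vestibular threshold. The names are illustrative, not part of any standard.

```python
import math

VESTIBULAR_THRESHOLD = 0.1  # m/s^2, approximate human perception threshold


def washout_duration(displacement_m, a_max=VESTIBULAR_THRESHOLD):
    """Minimal time to return the platform to its neutral position using
    a triangular velocity profile whose acceleration never exceeds a_max.
    With acceleration a for T/2 then deceleration for T/2:
    displacement = a_max * (T/2)^2, hence T = 2 * sqrt(d / a_max)."""
    return 2.0 * math.sqrt(displacement_m / a_max)


def washout_position(t, displacement_m, a_max=VESTIBULAR_THRESHOLD):
    """Platform offset from neutral at time t during the washout,
    starting at displacement_m and ending at 0."""
    T = washout_duration(displacement_m, a_max)
    if t <= 0:
        return displacement_m
    if t >= T:
        return 0.0
    if t <= T / 2:  # first half: accelerating back toward neutral
        return displacement_m - 0.5 * a_max * t * t
    dt = T - t      # second half: decelerating, mirrored profile
    return 0.5 * a_max * dt * dt
```

For a 0.1 m forward displacement, the unnoticeable return takes 2 s at the 0.1 m/s² threshold; a device with a larger range needs proportionally longer washouts, which illustrates why washouts cannot be authored without knowing the device's limitations.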
It can therefore be appreciated that there is a need for a solution that addresses at least some of the problems of the prior art. The present disclosure provides such a solution.