With the widespread consumer adoption of tablets and smartphones, it is now possible to exploit the sense of touch when interacting with content items such as videos and pictures. For example, a user may be allowed to "feel" the texture or roughness of a material rendered in an image. This sensation may be produced while the user touches the image, by means of haptic effects such as vibrations generated by actuators embedded in end-user devices, or by roughness variations produced by dedicated "smart surfaces", as described in "Geometrical optimization of an ultrasonic tactile plate for surface texture rendering" by Peter Sergeant, Frédéric Giraud and Betty Lemaire-Semail in 2010.
When a user interacts via a device such as a mouse, pseudo-haptic techniques can indirectly allow the user to feel the texture or relief of the material(s) rendered in the image. This is achieved by introducing a discrepancy between the motion of the handled device and the position of the cursor on the displayed image, as described in "Simulating haptic feedback using vision: A survey of research and applications of pseudo-haptic feedback" by Anatole Lécuyer in 2009.
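One common pseudo-haptic approach is to modulate the control/display (C/D) ratio so that the cursor slows down over regions rendered as rough, even though the physical mouse motion is unchanged. The sketch below illustrates this idea under assumed names and values (the `friction_map`, the gain range, and the helper functions are all hypothetical, not taken from the cited survey):

```python
# Hypothetical sketch of pseudo-haptic feedback via control/display (C/D)
# ratio modulation: the on-screen cursor displacement is scaled down by a
# per-pixel "roughness" value, creating a discrepancy between device motion
# and cursor motion that the user perceives as friction.

def cd_gain(friction_map, x, y):
    """Map the roughness under the cursor (0 = smooth, 1 = rough) to a
    display gain; the 0.7 attenuation factor is purely illustrative."""
    roughness = friction_map[int(y)][int(x)]
    return 1.0 - 0.7 * roughness  # rough areas slow the cursor to 30%

def update_cursor(x, y, dx, dy, friction_map, width, height):
    """Apply one mouse-motion event (dx, dy) and return the new position,
    clamped to the displayed image bounds."""
    gain = cd_gain(friction_map, x, y)
    nx = min(max(x + gain * dx, 0.0), width - 1)
    ny = min(max(y + gain * dy, 0.0), height - 1)
    return nx, ny

# Example: a 4x4 roughness map, smooth on the left half, rough on the right.
friction = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]

# Over the smooth region the cursor moves the full device distance.
smooth_pos = update_cursor(0.0, 0.0, 1.0, 0.0, friction, 4, 4)

# Over the rough region the same device motion yields only 30% displacement.
rough_pos = update_cursor(2.0, 0.0, 1.0, 0.0, friction, 4, 4)
```

In an interactive setting, `update_cursor` would be called on every pointer event, with the friction map derived from the material properties of the displayed image.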
To enhance sensory quality, when a user touches a screen or controls a mouse, sound(s) may be generated during the interaction. The sound(s) may correspond to the sound that would be produced when touching or rubbing the actual material. To reach that aim, the sound generated when touching the material may be recorded and replayed when a representation of the material is touched on the tablet/smartphone screen. Such a method has some limitations. For example, the recorded sound may have a limited duration, and looping this short sound during a long interaction may introduce audible artifacts at the loop boundaries (the stitching problem).
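The stitching problem can be made concrete with a small numeric sketch: if the recorded clip is cut mid-cycle, the sample value at its end does not match the value at its start, so naive looping creates a jump at every seam that is far larger than the step between adjacent samples inside the clip. All signal parameters below are synthetic, chosen only to illustrate the discontinuity:

```python
import math

# Illustrative sketch of the "stitching" artifact when looping a short sound.
# A pure tone stands in for a recorded contact sound; the clip length is
# deliberately not a whole number of periods, as with a real recording.

RATE = 1000   # samples per second (assumed)
FREQ = 10.0   # tone frequency in Hz, so one period is exactly 100 samples

# A 125-sample clip: cut mid-cycle, so it ends near a peak, not at zero.
clip = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(125)]

# Naive looping: repeat the clip back to back for a long interaction.
looped = clip * 3

# The jump across the first seam versus a typical step inside the clip.
seam_jump = abs(looped[125] - looped[124])   # from end of copy 1 to start of copy 2
typical_step = abs(clip[1] - clip[0])

# The seam discontinuity dominates the normal sample-to-sample variation,
# which is perceived as a click each time the loop restarts.
print(seam_jump > 5 * typical_step)
```

Mitigations such as crossfading the loop boundary or synthesizing the sound procedurally avoid this discontinuity, at the cost of additional processing.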