Rendering of media content (image, audio, speech, video, or audio-video) on any system, including televisions, cinema halls, and personal computers, is generally based on a single media stream. A viewer has limited options as far as content is concerned, and the role of the viewer in conventional content rendering devices is quite passive. The user can only increase or decrease the volume, change the brightness or contrast level of the rendering device, and record or play back content, among other features available on content rendering devices. However, if a user wants to immerse himself or herself in the scene, he or she has to watch the content with great interest and involvement, which is not possible in all situations.
To overcome the abovementioned disadvantages, content has been encoded with surround sound and vision parameters, whereby the user is able to perceive a sense of direction and a degree of realism owing to sound, lighting effects, and three-dimensional displays. However, the experience of actually feeling the situation depicted in the content is still not achieved by the abovementioned solutions.
In light of the abovementioned disadvantages, there is a need for a system and method for providing an immersive, enhanced content experience to users. Further, there is a need to employ environmental parameters in synchronization with the media stream. In addition, there is a need to insert environmental parameters into the stream during encoding for better depiction of content and to generate real effects during rendering. Furthermore, there is a need to encode environmental parameters separately, with frame sync and/or other stream element synchronization information, and to play them back along with the media stream, either with the help of a separate device or together with the media content.
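The notion of a separately encoded environmental track synchronized to media frames can be illustrated with a minimal sketch. The names here (`EnvParams`, `sync_playback`, the particular parameters) are purely illustrative assumptions, not part of any disclosed embodiment or existing standard; the sketch only shows one plausible way a frame-keyed parameter track could be merged with a media stream at playback time.

```python
# Hypothetical sketch: environmental parameters carried as a separate
# track keyed by frame index, merged with media frames during playback.
# All names and fields are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, Iterator, List, Tuple


@dataclass
class EnvParams:
    wind_speed: float    # e.g. fan intensity, 0.0-1.0
    temperature: float   # target ambient temperature, deg C
    light_level: float   # ambient lighting intensity, 0.0-1.0


def sync_playback(frames: List[str],
                  env_track: Dict[int, EnvParams]
                  ) -> Iterator[Tuple[str, EnvParams]]:
    """Yield each media frame paired with its frame-synced environmental
    parameters; frames without an entry reuse the last known values."""
    current = EnvParams(0.0, 22.0, 0.5)  # neutral defaults before any cue
    for idx, frame in enumerate(frames):
        current = env_track.get(idx, current)
        yield frame, current


frames = ["f0", "f1", "f2", "f3"]
env_track = {0: EnvParams(0.2, 22.0, 0.8),   # gentle breeze, bright scene
             2: EnvParams(0.9, 18.0, 0.2)}   # storm: strong wind, cold, dark
timeline = list(sync_playback(frames, env_track))
# f1 inherits the parameters cued at f0; f3 inherits those cued at f2
```

Keying cues by frame index keeps the environmental track independent of the media codec, so the same track could in principle drive either the rendering device itself or a separate ambient-effects device.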