Media assets often rely on multiple senses of a user to convey information. For example, the context of a movie often depends on audio information (e.g., sounds, music, character dialogue, etc.) as well as video information (e.g., scenery, the appearance of characters, actions performed by characters, etc.). Consequently, if a user has a disability (e.g., blindness, deafness, etc.), the user's ability to comprehend the movie may be limited.
While some methods of conveying audio information visually (e.g., subtitles) are known, these methods are typically limited to transcribing character dialogue and do little to describe the broader context of a media asset. Furthermore, these methods (e.g., subtitles) are generated in a “one-type-fits-all” manner and are therefore not customized to compensate for the disability of a particular user.