As systems that record both the audio and video events of daily life become more popular, finding information within the recorded data is becoming increasingly difficult. Even with capabilities such as facial recognition and scene detection in photographic images, existing systems and interfaces are limited in their ability to present large amounts of data, as well as the connections among the data elements.
Prior art systems and user interfaces are generally limited to the two dimensions of a computer screen, with various shading and sizing methods used to represent a third dimension. Virtual universes, by contrast, allow users to navigate fully in three dimensions, providing a more natural way to search through data and information. In a virtual universe, avatars can "fly" in three dimensions, and information from the real world can be dynamically integrated into the environment. However, virtual universes remain limited in their ability to efficiently organize and display audio/video data associated with a user of the virtual universe and/or the user's avatar.