There are numerous applications where it would be advantageous for a person to be able to view blocked scenes, for example, around corners, behind blocking structures, or inside buildings or other structures. Mounted mirrors are sometimes used to allow viewers to see around blind corners. Large display screens are used at performance and sports venues to give the audience better views of the stage, court, or playing field. Some systems use remote cameras to assist in surveillance. However, conventional systems do not adjust the viewpoint; when a person uses a remote camera to “see” around a corner, for example, the person may lose orientation because of the need to mentally translate the remote camera's view to his or her own viewpoint.
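The mental translation described above is, geometrically, a change of reference frame. A minimal sketch (assuming hypothetical 2D poses, each given as position plus heading angle; the function names and values are illustrative, not taken from the source) of re-expressing a point seen by a remote camera in the viewer's own frame:

```python
import math

def to_world(pt, pose):
    # pose = (x, y, yaw): rotate by yaw and translate to express a point
    # given in the camera's local frame in world coordinates (2D case).
    x, y, yaw = pose
    px, py = pt
    return (x + px * math.cos(yaw) - py * math.sin(yaw),
            y + px * math.sin(yaw) + py * math.cos(yaw))

def to_local(pt_world, pose):
    # Inverse transform: express a world point in an observer's local frame.
    x, y, yaw = pose
    dx, dy = pt_world[0] - x, pt_world[1] - y
    return (dx * math.cos(yaw) + dy * math.sin(yaw),
            -dx * math.sin(yaw) + dy * math.cos(yaw))

# Hypothetical setup: a remote camera mounted around a corner, facing +y,
# and a viewer at the origin facing +x.
camera_pose = (10.0, 0.0, math.pi / 2)
viewer_pose = (0.0, 0.0, 0.0)

# A point 5 m in front of the remote camera, re-expressed in the
# viewer's own frame so no mental translation is required.
pt_world = to_world((5.0, 0.0), camera_pose)
pt_viewer = to_local(pt_world, viewer_pose)
```

A system that performs this transform automatically would present the remote view from the user's own viewpoint, avoiding the disorientation noted above.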
Events today are often recorded simultaneously by many cameras equipped with communications capability, yet no method exists to facilitate the rapid exchange of information that would enable each user to collaborate with all others to yield a three-dimensional (3D) rendered scene with improved information of interest. Many conventional photogrammetric and computer vision techniques can extract 3D information from a multi-view set of images, but these techniques require either that all users share their views with all other users, or that users post their collected views to a central server, which then distributes the results. In either case, the communication and processing load can be an impediment to real-time processing of the information.
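The communication load of the two conventional topologies can be made concrete with a rough count of view transfers (a simplified sketch; the counting assumes one message per view transfer and ignores message size, and the function names are illustrative):

```python
def all_to_all_messages(n):
    # Fully distributed sharing: each of n users sends its view to
    # each of the other n - 1 users, so the count grows quadratically.
    return n * (n - 1)

def central_server_messages(n):
    # Central server: each user uploads its view once, and the server
    # distributes one result back to each user, so the count grows linearly.
    return 2 * n

# Even at modest user counts, the all-to-all load dominates.
loads = {n: (all_to_all_messages(n), central_server_messages(n))
         for n in (10, 100, 1000)}
```

The quadratic growth of all-to-all sharing, and the concentration of the linear load at a single server in the other case, are the impediments to real-time processing referred to above.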