As the processing power of microprocessors and the quality of graphics systems have increased, environment mapping systems have become feasible on personal computer systems. Environment mapping systems use computer graphics to display the surroundings or environment of a theoretical viewer. Ideally, a user of the environment mapping system can view the environment at any angle or elevation. FIG. 1 illustrates the construct used in conventional environment mapping systems. A viewer 105 (represented by an angle with a curve across the angle) is centered at the origin of a three-dimensional space having x, y, and z coordinates. The environment of viewer 105 (i.e., what the viewer can see) is ideally represented by a sphere 110, which surrounds viewer 105. Generally, for ease of calculation, sphere 110 is defined with a radius of 1 and is centered at the origin of the three-dimensional space. More specifically, the environment of viewer 105 is captured and then re-projected onto the inner surface of sphere 110. Viewer 105 has a view window 130, which defines the portion of sphere 110 that viewer 105 can see at any given moment. View window 130 is typically displayed on a display unit for the user of the environment mapping system.
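The construct of FIG. 1 can be sketched in code. The following Python fragment (an illustrative sketch only, not part of any described system; the function name and the azimuth/elevation angle convention are assumptions) maps a viewing direction of viewer 105 to a point on the unit sphere 110 centered at the origin:

```python
import math

def view_direction(azimuth_deg, elevation_deg):
    """Map a viewer's azimuth and elevation angles to a point on the
    unit sphere (radius 1, centered at the origin), matching the
    construct of FIG. 1.  Azimuth is measured in the x-y plane from
    the positive x axis; elevation is measured from that plane toward
    the positive z axis (an assumed convention)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.cos(az)
    y = math.cos(el) * math.sin(az)
    z = math.sin(el)
    return (x, y, z)
```

Because the sphere has radius 1, every returned point has unit magnitude; a view window such as view window 130 would correspond to a bundle of such directions around the viewer's central viewing direction.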
Conventional environment mapping systems include an environment capture system and an environment display system. The environment capture system creates an environment map which contains the necessary data to recreate the environment of viewer 105. The environment display system displays portions of the environment in view window 130 based on the field of view of the user of the environment display system. An environment display system is described in detail by Hashimoto et al., in co-pending U.S. patent application Ser. No. 09/505,337, entitled “POLYGONAL CURVATURE MAPPING TO INCREASE TEXTURE EFFICIENCY.” Typically, the environment capture system includes a camera system to capture the entire environment of viewer 105. Specifically, the field of view of the camera system must encompass the totality of the inner surface of sphere 110.
An extension to environment mapping is generating and displaying immersive videos. Immersive video involves creating multiple environment maps, ideally at a rate of 30 frames per second, and displaying appropriate sections of the multiple environment maps for viewer 105, also ideally at a rate of 30 frames per second. Immersive videos provide a dynamic environment rather than the single static environment provided by a single environment map. Furthermore, immersive video techniques allow the location of viewer 105 to be moved. For example, an immersive video can be made to capture a flight through the Grand Canyon. The user of an immersive video display system would be able to take the flight and look out at the Grand Canyon at any angle. Camera systems for environment mapping can easily be converted for use with immersive videos by using video cameras in place of still-image cameras.
Many conventional camera systems exist to capture the entire environment of viewer 105. For example, cameras can be adapted to use hemispherical lenses to capture a hemisphere of sphere 110, i.e., half of the environment of viewer 105. By using two cameras with hemispherical lenses, the entire environment of viewer 105 can be captured. However, the images captured by a camera with a hemispherical lens require intensive processing to remove the distortions caused by the lens. Furthermore, two cameras provide very limited resolution for capturing the environment of viewer 105. Thus, environment mapping using images captured with cameras using hemispherical lenses can only produce low-resolution displays while still requiring intensive processing.
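The "intensive processing" required for hemispherical lenses can be illustrated with a short sketch. Assuming an idealized equidistant fisheye model (an assumption for illustration; real lens distortion profiles differ and typically require calibration), every pixel of the captured image must be trigonometrically remapped to a ray direction on sphere 110:

```python
import math

def fisheye_to_ray(u, v, image_radius):
    """For an ideal equidistant fisheye lens (assumed model), map a
    pixel offset (u, v) from the image center to a unit 3-D ray
    direction.  The angle from the optical axis grows linearly with
    the pixel's distance from the center, reaching 90 degrees at the
    image rim.  Performing this remapping for every pixel of every
    frame is part of the intensive processing needed to remove the
    hemispherical lens distortion."""
    r = math.hypot(u, v)
    if r == 0:
        return (0.0, 0.0, 1.0)  # pixel on the optical axis
    theta = (r / image_radius) * (math.pi / 2)  # angle from optical axis
    sin_t = math.sin(theta)
    return (sin_t * u / r, sin_t * v / r, math.cos(theta))
```

A full dewarping pass applies this mapping (or its inverse) once per output pixel per frame, which is why two-camera hemispherical systems trade heavy computation for their wide coverage.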
Other camera systems use multiple outward-facing cameras based on the five regular polyhedrons, also known as the platonic solids. Specifically, each camera of the camera system points radially outward from a common point, e.g., the origin of the three-dimensional space, towards the center of a face of the regular polyhedron. For example, as illustrated in FIG. 2, conceptually, a cube 220 (also called a hexahedron) surrounds sphere 110. As illustrated in FIG. 2(b), camera system 250 includes cameras 251, 252, 253, 254, 255, and 256. Camera 256, which is obstructed by camera 251, is not shown. FIG. 2(b) is drawn from the perspective of looking down on the camera system from the Z axis, with the positive Z axis coming out of the page. Each camera faces outward from the origin and points towards the center of a face of the cube. Thus, cameras 251 and 256 are on the Z axis and face out of the page and into the page, respectively. Similarly, cameras 252 and 254 are on the Y axis and point up and down on the page, respectively. Cameras 253 and 255 are on the X axis and point to the right and to the left of the page, respectively. Similar approaches can be used for each of the four other regular polyhedrons (i.e., tetrahedrons, octahedrons, dodecahedrons, and icosahedrons).
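The hexahedral arrangement of FIG. 2(b) can be expressed directly as a table of view directions. The sketch below (illustrative only; the reference numerals are reused from the figure as dictionary keys for clarity) lists the outward-facing direction of each camera of camera system 250:

```python
def cube_camera_directions():
    """Outward-facing unit view directions for a hexahedral (cube)
    camera system as in FIG. 2(b): one camera per face, each pointing
    from the origin toward the center of a cube face along a
    coordinate axis."""
    return {
        251: (0, 0, 1),   # +Z, out of the page
        256: (0, 0, -1),  # -Z, into the page (hidden behind camera 251)
        252: (0, 1, 0),   # +Y, up on the page
        254: (0, -1, 0),  # -Y, down on the page
        253: (1, 0, 0),   # +X, right on the page
        255: (-1, 0, 0),  # -X, left on the page
    }
```

Because the six directions come in opposing pairs along the three axes, they sum to zero and together cover all of sphere 110; the analogous tables for the other platonic solids would have 4, 8, 12, or 20 face-centered directions.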
However, camera systems based on regular polyhedrons make poor use of the image data provided by standard cameras. Specifically, as illustrated in FIG. 3(a), standard cameras provide a rectangular image 310 having a long side 315 and a short side 317. The ratio of the width to the height of an image is defined as the aspect ratio. Thus, the ratio of the length of long side 315 to the length of short side 317 is the aspect ratio of rectangular image 310. Typical aspect ratios include 4:3 (1.33) and 16:9 (1.78). Regular polyhedrons have faces formed by triangles, squares, or pentagons. The short side of rectangular image 310 must encompass the face of the polyhedron. Therefore, as shown in FIGS. 3(b)-3(d), most of the image data captured by conventional cameras is not used by an environment capture system. Specifically, FIG. 3(b) shows a square face 320 of a hexahedron within rectangular image 310. Similarly, FIG. 3(c) shows a triangular face of a tetrahedron, octahedron, or icosahedron within rectangular image 310, and FIG. 3(d) shows a pentagonal face of a dodecahedron within rectangular image 310. Typically, the short side of rectangular image 310 is slightly larger than the polyhedral face to allow some overlap between the various cameras of the camera system. The overlap allows for minor alignment problems which may exist in the camera system. An environment capture system uses only the data within the faces of the polyhedron; the rest of rectangular image 310 is not used. Thus, only a small portion of the image data captured by each camera is utilized to generate the environment map. Consequently, even the use of multiple cameras arranged using regular polyhedrons may not provide enough resolution for quality environment mapping systems. Hence, there is a need for an efficient camera system for use with environment mapping and immersive videos.
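The wasted image area of FIGS. 3(b)-3(d) can be quantified with a sketch. Assuming each polyhedral face is inscribed so that its height equals the short side of rectangular image 310 and ignoring the overlap margin (both assumptions for illustration), the fraction of the image actually used is:

```python
import math

def face_utilization(aspect, face):
    """Fraction of a rectangular image with the given aspect ratio
    that lies inside a polyhedral face whose height equals the short
    side (overlap margin ignored).  Illustrates the unused image data
    of FIGS. 3(b)-3(d)."""
    s = 1.0                       # short side length, normalized
    image_area = aspect * s * s   # long side = aspect * short side
    if face == "square":          # hexahedron face, FIG. 3(b)
        face_area = s * s
    elif face == "triangle":      # equilateral face of height s, FIG. 3(c)
        side = 2 * s / math.sqrt(3)
        face_area = (math.sqrt(3) / 4) * side * side
    elif face == "pentagon":      # regular pentagon of height s, FIG. 3(d)
        # height = circumradius * (1 + cos(pi/5)) for an apex-up pentagon
        R = s / (1 + math.cos(math.pi / 5))
        face_area = (5 / 2) * R * R * math.sin(2 * math.pi / 5)
    else:
        raise ValueError(face)
    return face_area / image_area
```

Under these assumptions a square face uses at most 1/aspect of the image (75% at 4:3, roughly 56% at 16:9), and triangular and pentagonal faces use even less, which is why polyhedron-based camera systems discard much of each camera's resolution.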