Numerous applications exist for an immersive virtual reality environment. These include providing virtual tours of real estate, in which a user can select a particular area of a house or other property to be viewed more closely, security applications, theme park rides, surgical procedures, virtual travel tours, interactive television and many others. Prior art devices and procedures exist to create an immersive virtual reality environment. It is known how to map stored images in such a way as to create the immersive environment, but these techniques are mathematically very complex and time consuming to apply even with sophisticated computer hardware.
Currently, in order to capture an image and achieve the wide field of view of a wide-angle lens in a cost-effective way for use with virtual reality technology, a user can record multiple small, essentially distortion-free images that are stitched together at the edges to create one image. Wide-angle partial panoramic photos and 360° images can be recorded using either a standard camera or a digital camera. This prior art overlaps the images to allow for stitching and cropping using application software. The small images must be taken with approximately a 20-30% overlap to achieve a successful stitch and panoramic view. When using a standard camera, it is usually necessary to mount the camera on a tripod to achieve a common vertical reference point. This method does allow the virtual reality viewer to view the multiple images as one undivided image.
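The overlap requirement described above translates into simple arithmetic. The following sketch (illustrative only; the function and its parameter values are assumptions, not taken from any prior art system described here) estimates how many overlapping shots a full 360° panorama requires when adjacent images share roughly 20-30% of their content:

```python
import math

def shots_for_panorama(fov_deg: float, overlap: float) -> int:
    """Number of images needed to cover a full 360-degree panorama.

    fov_deg: horizontal field of view of one image, in degrees.
    overlap: fraction of each image shared with its neighbor (0 to 1).
    Each new image advances the panorama by only fov_deg * (1 - overlap),
    since the overlapping portion repeats the previous shot.
    """
    advance = fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / advance)

# A standard 50 mm-equivalent lens covers roughly 40 degrees horizontally.
print(shots_for_panorama(40.0, 0.25))  # 12 shots at 25% overlap
```

This illustrates why the stitching approach is time consuming: even a modest overlap requirement inflates a panorama into a dozen or more separate exposures, each of which must then be aligned and cropped in software.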
In order to make capturing images for virtual reality technology time efficient, less complex, and thus less expensive, it is necessary to use lenses that can view larger angles of the environment. Lenses with a viewing angle between 120 and 180 degrees capture larger portions of the data required to prepare an immersion view and eliminate the need to stitch together many small images. Currently, applications exist for displaying wide-angle images, including those provided by fish-eye lenses, donut-shaped lenses, and spherical lenses, but these applications are also too mathematically complex and time intensive to apply to real-time situations.
Images captured by wide-angle lenses are difficult to display because they include significant aberrations that must be compensated for. The aberrations that a wide-angle lens produces fall primarily into five common categories: spherical aberration, chromatic aberration, astigmatism, distortion and field curvature. Distortion is the aberration most widely recognized and corrected for. It is a two-dimensional aberration that is ordinarily noticed when an image recorded by a wide-angle lens is projected on a plane surface. Most people recognize it by what some describe as a pincushion- or barrel-shaped distortion of the image.
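The pincushion and barrel distortion mentioned above is commonly modeled as a radial polynomial (the standard Brown-Conrady lens model from optics, supplied here as background; it is not taken from the patent text). In this model a negative k1 coefficient produces barrel distortion, pulling edge points toward the center, while a positive k1 produces pincushion distortion:

```python
def distort(x: float, y: float, k1: float, k2: float = 0.0):
    """Map an ideal image point (x, y) to its distorted position.

    Coordinates are normalized so the image center is (0, 0).
    The displacement grows with the squared distance from the center,
    which is why distortion is most visible at the edges of the frame.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the corner of a normalized image plane:
print(distort(0.5, 0.5, k1=-0.2))  # barrel: pulled toward the center
```

Because the displacement depends only on distance from the optical axis, this kind of distortion leaves the image center nearly faithful while bending straight lines near the edges, exactly the effect a viewer notices in uncorrected wide-angle imagery.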
One way to correct for distortion from a wide-angle lens when the image is displayed is to project the recorded image on a screen that has curvature. An example of this is an OMNImax™ theater, where the recorded image is displayed on a screen shaped like a dome. This technique is impractical for head-mounted displays and for most low-cost applications.
Current mathematical techniques for correcting distortion in a displayed image use mathematical transforms that “flatten out” the displayed image in the direction being viewed. This method requires sophisticated computer hardware for implementing the mathematical transformations, as discussed in U.S. Pat. No. 5,990,941 by Jackson, et al., METHOD AND APPARATUS FOR THE INTERACTIVE DISPLAY OF ANY PORTION OF A SPHERICAL IMAGE (Jackson '941). Motorola has also attempted to simplify the complex mathematics needed to correct for distortions in U.S. Pat. No. 5,489,940 by Richardson, et al., ELECTRONIC IMAGING SYSTEM AND SENSOR FOR CORRECTING THE DISTORTION IN A WIDE ANGLE LENS (Richardson '940). Richardson '940 uses a plurality of imaging elements distributed on the surface of a sensor according to a nonlinear function, wherein the distribution of the imaging elements corrects the distortion in the wide-angle image. The sensor has a rectangular array to transform a rectangular portion of the image produced by a wide-angle lens. Each of the sensing elements on the sensor has a unique two-dimensional address that allows the particular sensing element to be electronically accessed. In the Richardson '940 patent the address of an arbitrary sensing element can be represented by the coordinate pair (a, b). The physical location of a sensing element having an address (a, b) is given by (x, y) as follows:

x = R sin((Ta/180)(n² + m²)^(−1/2))  (1)

and

y = R sin((Tb/180)(n² + m²)^(−1/2))  (2)

where (n, m), (−n, m), (n, −m) and (−n, −m) are the rectangular coordinates of the physical boundaries of the rectangular sensing array. Although the Richardson '940 patent is presented as a simplification over the prior art, it is readily evident that it still involves a substantial amount of mathematical computation.
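For concreteness, equations (1) and (2) can be exercised directly. The sketch below implements the mapping as stated; the particular values chosen for R, T, n and m in the usage comment are illustrative assumptions, not figures from Richardson '940:

```python
import math

def sensor_position(a, b, R, T, n, m):
    """Physical location (x, y) of the sensing element addressed (a, b).

    Implements x = R sin((Ta/180)(n^2 + m^2)^(-1/2)) and the matching
    expression for y, per equations (1) and (2). (n, m) and its sign
    variants are the corners of the rectangular sensing array.
    """
    # math.hypot(n, m) is sqrt(n^2 + m^2), so dividing gives the
    # (n^2 + m^2)^(-1/2) factor in the equations.
    k = (T / 180.0) / math.hypot(n, m)
    return R * math.sin(k * a), R * math.sin(k * b)

# Illustrative values only: R = 1, T = 180, array corners at (3, 4).
print(sensor_position(5.0, 0.0, R=1.0, T=180.0, n=3.0, m=4.0))
```

Even in this compact form, every element address requires trigonometric evaluation, which supports the observation above that Richardson '940, while simpler than earlier approaches, still carries a nontrivial computational burden.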
Since current virtual reality techniques are so complicated and expensive, these applications are limited to users for whom the cost can be justified. There is a need for a simpler approach that would allow the virtual reality technique to be used more widely in both non-real-time and real-time situations. These could include both static and video presentations. In non-real time, a viewer can stop and look around at any given point in time, observing different sections of the image. In real-time viewing, a viewer can appear to be observing a realistic, continuous image if sufficiently high video frame rates are used to present the information in conjunction with real-time virtual reality, immersion technology.
Lenses that have a viewing angle greater than 120 degrees create greater amounts of distortion at the edges. Since the present methods of correcting for these distortions are complex, prior devices have not achieved display rates greater than 20 frames per second in immersive virtual reality environments, including those that use the complex mathematical techniques discussed in Jackson '941. A minimum display rate of 24 frames per second is needed to provide a non-jumpy, continuous display of images, which adds realism for the viewer in a virtual reality or immersion situation. It is important that deformation and flicker be minimized, if not eliminated, and that the distortion at the edges of the image be removed so that the quality is sufficient to provide a realistic image. There is a need for an image display system, especially for wide-angle lenses, that corrects for distortions and is mathematically simple, not complicated or time intensive, and thus can run in real time with minimal resources.