Stereoscopic image capture of real world environments is of growing importance as 3D viewing devices, e.g., head mounted displays capable of displaying different images to a user's left and right eyes, are becoming more common. For example, capturing images of a sporting event or other event such as a concert using cameras, and then transmitting the images for use as textures on the surfaces of a simulated environment, can give a user of a playback device a sense of being at the event.
To capture images of an event for such purposes, wide angle lenses, such as circular fisheye lenses, are often used. FIG. 1 illustrates the shape of a circular fisheye lens 100 as seen from the front. Circular fisheye lenses are common and frequently used to capture images of an environment. Such a lens has a shape which approximates that of a half sphere. A circular fisheye lens normally takes in a 180° hemisphere and projects it as a circle on the camera sensor, such as the sensor 200 shown in FIG. 2. Camera sensors are normally rectangular, with one side being longer than the other.
Circular fisheye lenses allow for the capture of a very wide angle, e.g., 180 or approximately 180 degrees in both the horizontal and vertical dimensions. This can be particularly useful when capturing images which are to be used in simulating a complete 360 degree environment in both vertical and horizontal directions, since a full or nearly full view can be captured with as few as two lenses.
Unfortunately, the use of circular fisheye lenses has the drawback of being inefficient in terms of available pixel sensor elements, with much of the sensor area, including pixel sensors, being wasted. FIG. 2 shows how light 101 passing through the fisheye lens 100 will typically fall on a sensor 200 of a camera device when the fisheye lens 100 is used. Note that the pixel sensors outside the region 101 in which light falls are wasted, leaving many of the pixel elements of sensor 200 unused.
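The scale of this waste follows from simple geometry: when the image circle's diameter equals the sensor's shorter side, the fraction of a w × h sensor that receives light is π·h/(4·w). The following sketch computes that fraction; the 36 mm × 24 mm full-frame sensor dimensions are an illustrative assumption, not values taken from the figures.

```python
import math

def circular_fisheye_sensor_usage(sensor_w_mm: float, sensor_h_mm: float) -> float:
    """Fraction of a rectangular sensor's area covered by a circular
    fisheye image circle whose diameter equals the sensor's short side."""
    diameter = min(sensor_w_mm, sensor_h_mm)
    circle_area = math.pi * (diameter / 2) ** 2
    return circle_area / (sensor_w_mm * sensor_h_mm)

# Illustrative full-frame sensor, 36 mm x 24 mm (assumed dimensions):
usage = circular_fisheye_sensor_usage(36.0, 24.0)
print(f"{usage:.1%} of the sensor receives light")  # roughly 52%, so ~48% wasted
```

For this sensor aspect ratio the used fraction is π/6, i.e., close to half the pixel elements never receive image light, which is the inefficiency the discussion above refers to.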
In an attempt to address the failure of circular fisheye lenses to use the full area of a sensor, fisheye lenses that enlarge the image circle to cover the entire sensor area were developed, with such lenses sometimes being referred to as full-frame fisheye lenses.
The picture angle produced by conventional full-frame fisheye lenses measures 180 degrees only when measured from corner to corner. Such lenses have a 180° diagonal angle of view, while the horizontal and vertical angles of view are smaller. For an equisolid angle-type 15 mm full-frame fisheye, the horizontal field of view (FOV) will be 147° and the vertical FOV will be 94°.
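These figures can be checked against the idealized equisolid-angle projection, r = 2·f·sin(θ/2), where f is the focal length and θ the angle from the optical axis at image height r. Solving for the full angle of view reproduces the 147° and 94° figures above on a 36 mm × 24 mm sensor; the sensor dimensions are an assumption for illustration, and real lens designs deviate slightly from the ideal projection.

```python
import math

def equisolid_fov_deg(focal_mm: float, half_image_mm: float) -> float:
    """Full angle of view (degrees) for an idealized equisolid-angle
    fisheye projection r = 2 * f * sin(theta / 2), where r is the image
    height reached by a ray at angle theta from the optical axis.
    Inverting gives theta = 2 * asin(r / (2 * f)); the full FOV is 2 * theta."""
    return math.degrees(4 * math.asin(half_image_mm / (2 * focal_mm)))

f = 15.0  # focal length in mm
print(f"horizontal FOV: {equisolid_fov_deg(f, 36.0 / 2):.0f} deg")  # ~147
print(f"vertical FOV:   {equisolid_fov_deg(f, 24.0 / 2):.0f} deg")  # ~94
```

Because manufacturer-quoted angles reflect the actual, non-ideal lens design, published specifications may differ from this model by a degree or two.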
Such full-frame fisheye lenses treat the top and bottom portions of the environment equally, with the same number of pixel elements of the sensor being allocated to the top horizontal portion of a field of view as to the bottom horizontal portion. It should be appreciated that conventional full-frame fisheye lenses tend to be symmetric with respect to the left and right portions of the field of view, as well as the top and bottom portions, in the same manner that circular fisheye lenses are symmetric in the horizontal and vertical directions.
While conventional circular fisheye lenses and conventional full-frame fisheye lenses have a wide range of applications, they are not as well suited to the capture of images of real world environments for virtual reality applications as many people would desire.
As should be appreciated, users of a virtual reality device tend not to give the same importance to all portions of an environment. For example, the ground may be of little interest to a user of a virtual reality device. In the case of the capture of images for virtual reality, whether non-stereoscopic or stereoscopic, it would be desirable if different portions of the environment could be captured in a manner that uses sensor pixel elements in a way that reflects the relative importance of various portions of the environment to a user of a virtual reality device, e.g., with higher priority portions of an environment being captured at higher resolution than lower priority portions.
Some virtual reality applications do not involve the use of stereoscopic images, or use stereoscopic image pairs generated synthetically from images captured by multiple cameras along with depth information. However, real time 3D image content capture and streaming often involves the use of camera pairs to capture images which are intended to serve as left and right eye images. Slight differences between the left and right images of a stereoscopic image pair provide depth information, allowing a user viewing different left and right images to perceive the images in 3D. Such capture of stereoscopic image pairs can avoid the need to manipulate an image to synthetically generate the corresponding image of a stereoscopic image pair. Such processing can be computationally difficult and time consuming, making it unsuitable for some real time applications.
While not important for all applications, it would be desirable if at least some consideration were given, in the development and use of wide angle lenses, to how such lenses operate as part of a stereoscopic pair, and if methods and/or apparatus were developed which facilitate the use of wide angle lenses in a stereoscopic camera pair.
In view of the above discussion, it should be appreciated that there is a need for improved methods and/or apparatus relating to wide angle lenses, or to devices which use such lenses in capturing images intended to support virtual reality applications, whether such applications are non-stereoscopic or stereoscopic.