Conventional omnidirectional cameras (also known as virtual reality cameras, spherical cameras, panorama cameras, immersive video cameras, or 360 cameras) present design challenges. The purpose of an omnidirectional camera is to capture video in all directions surrounding the camera (i.e., 360 degrees about each axis). The captured video represents a complete view of the scene surrounding the camera. A user typically views the captured video on playback using a head-mounted display or an interactive video player. The video orientation can be changed in any direction during playback.
The video provides the user with a spherical field of view of the scene surrounding the omnidirectional camera. A single lens cannot capture an entire spherical field of view. Conventional solutions include placing a convex mirror in front of the lens or capturing images from multiple lenses as several separate video signals. Using a mirror provides only 360 degrees of horizontal coverage, while losing the top and bottom of the spherical field of view. When using multiple lenses, the multiple images are stitched together into a 360 degree intermediate representation. The multiple images need to have sufficient overlap so that the overlapping areas can be blended together to offer a continuous and smooth representation of the scene surrounding the camera.
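The blending of overlapping areas can be illustrated with a minimal sketch. The cross-fade below operates on a single row of pixel intensities from each of two images; this is an assumption for illustration only, as a real stitcher warps full 2-D images into a shared projection and may use more sophisticated blending (e.g., multiband blending) than the linear weights shown here:

```python
def crossfade(left, right):
    """Linearly blend two equal-length rows of pixel values across
    their overlap: the weight slides from all-left to all-right."""
    n = len(left)
    assert len(right) == n and n > 1
    out = []
    for i in range(n):
        w = i / (n - 1)  # 0.0 at the left edge of the overlap, 1.0 at the right
        out.append((1 - w) * left[i] + w * right[i])
    return out

# Toy overlap: the left image is uniformly 100, the right uniformly 200.
print(crossfade([100] * 5, [200] * 5))  # → [100.0, 125.0, 150.0, 175.0, 200.0]
```

Because the weights sum to 1 at every pixel, the transition from one image to the other is continuous, which is what makes the blended seam unobtrusive when the two views agree.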
When multiple images are stitched together, the parallax of objects viewed by different cameras can create artifacts in the blended/overlapping areas. The parallax occurs because the objects are viewed differently (i.e., at different relative positions) by each camera. The blending artifacts are visible when viewing the spherical field of view and detract from the user experience.
To avoid parallax related artifacts, two theoretical alternative conditions may be implemented. In one theoretical implementation for reducing parallax related artifacts, all objects are viewed (e.g., captured in images) at a sufficient distance (e.g., theoretically an infinite distance). Viewing the objects at the sufficient distance may nullify the parallax related artifacts. However, an infinite viewing distance is not realistic in real camera implementations. In another theoretical implementation, the centers of projection (e.g., focal points, optical centers and/or convergence points) of all cameras share the same physical location. The center of projection may be a point at which initially collimated rays of light meet after passing through a convex lens. Generally, multiple cameras sharing the same location for the centers of projection is not physically possible, because the center of projection of each camera is located somewhere between the lens and the sensor, inside each camera module. The volumes of space physically occupied by the cameras would need to intersect in order for the respective centers of projection to coincide (or at least be close to one another).
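The first theoretical condition can be quantified with a short sketch. The angular parallax of a point seen by two cameras falls off with the point's distance, so a "sufficient distance" drives the parallax toward zero. The baseline and distances below are illustrative values not taken from the text:

```python
import math

def parallax_angle_deg(baseline_m, distance_m):
    """Angular parallax (in degrees) of a point at `distance_m` meters,
    seen by two cameras whose centers of projection are separated by
    `baseline_m` meters (small-angle geometry of an isosceles triangle)."""
    return math.degrees(2 * math.atan((baseline_m / 2) / distance_m))

# A 5 cm separation between centers of projection (hypothetical value):
for d in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {d:6.1f} m -> parallax {parallax_angle_deg(0.05, d):.4f} deg")
```

The printed angles shrink monotonically as the distance grows, which is why nearby objects in the overlap regions produce visible blending artifacts while distant objects do not.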
It would be desirable to implement an omnidirectional camera to minimize parallax effects.