Many different fields of endeavor have a need to image extended fields of view with high resolution to detect and observe objects within the field of view or track movement relative to reference points. For example, observational astronomy, celestial navigation systems, and security/surveillance applications all need to monitor extended fields of view with high resolution. Image sensors are limited by a tradeoff between field of view and resolution: with a finite number of pixels on the sensor, the sampling resolution in object space (i.e., the number of pixels devoted to a given area in the field of view being imaged) decreases as the field of view increases. When requirements demand a combination of extended field of view and resolution that exceeds what a conventional single-camera, fixed-field-of-view architecture can provide, these needs are often met using arrays of multiple cameras or image sensors arranged to view different regions of a scene, or using a single sensor or pixel array with a scanning mechanism (e.g., a pan-tilt-zoom mechanism) to sweep out a high-resolution image of an extended field of view over time. The former approach is bulky and costly because it requires discrete optical and sensor assemblies for each region of the field of view. The latter suffers from the need for a scanning mechanism and from intermittent temporal sampling (i.e., the device cannot view the entire field of view at any one time). Other designs incorporate both a bank of cameras and scanning mechanisms to improve upon some aspects of dedicated array or scanning devices, but these hybrid devices also suffer the disadvantages of both.
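The field-of-view/resolution tradeoff described above can be illustrated with simple arithmetic. The following is a minimal sketch; the sensor width and the field-of-view values are hypothetical examples chosen for illustration, not figures from this disclosure.

```python
# Illustrative arithmetic for the field-of-view vs. resolution tradeoff:
# with a fixed pixel count, widening the field of view reduces the
# angular sampling density in object space (pixels per degree).

def pixels_per_degree(sensor_pixels: int, fov_degrees: float) -> float:
    """Angular sampling density for a sensor spanning the given field of view."""
    return sensor_pixels / fov_degrees

sensor_width_px = 4096  # hypothetical sensor format (pixels across one axis)

for fov in (10.0, 40.0, 120.0):  # hypothetical fields of view, in degrees
    density = pixels_per_degree(sensor_width_px, fov)
    print(f"{fov:5.1f} deg FOV -> {density:7.1f} px/deg")
```

The same pixel budget spread over a 12x wider field of view yields 12x fewer pixels per degree, which is the tradeoff motivating the multi-camera and scanning architectures discussed above.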
Other fields endeavor to create a stereo image or a three-dimensional (3D) depth image of a scene. This can be done using two or more cameras that observe an object from different perspectives, or with a single camera that produces images from two or more perspectives on a single focal plane. The former method suffers from the added cost, power, volume, and complexity of using multiple cameras, as well as from geometric and intensity differences between the images resulting from their different optical systems. Single-camera methods typically either (a) use prisms or mirrors to produce two or more shifted images on the camera's focal plane, where each image fills only a fraction of the focal plane's area to prevent overlap, resulting in a reconstructed stereo image that has a smaller field of view and fewer pixels than are available in the image sensor, or (b) use a moving element that allows a sequence of frames to be captured from different perspectives. The latter approach is more complex and restricts the sampling rate of the system.
Optically multiplexed imaging is a developing field in the area of computational imaging. Images from different regions of a scene, or from different perspectives of the same region, are overlaid on a single sensor to form a multiplexed image in which each pixel on the focal plane simultaneously views multiple object points, or the same object point from multiple perspectives. A combination of hardware and software processes is then used to disambiguate the measured pixel intensities and produce a de-multiplexed image. For a system with N multiplexed channels, the resulting image has N times as many pixels as the image sensor used to capture the multiplexed image. This technique allows a multiplexed imaging device to increase its effective resolution (i.e., the number of pixels in the reconstructed image), which can then be applied to extending the field of view or capturing images from multiple perspectives without resolution loss.
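The forward model behind this N-channel pixel gain can be sketched numerically. The following is a toy illustration only: the channel count, sensor format, and random stand-in imagery are hypothetical, and the de-multiplexing (disambiguation) algorithm itself is beyond the scope of this sketch.

```python
import numpy as np

# Toy forward model of optically multiplexed imaging: N channel images,
# each matching the sensor format, are superimposed (summed) on a single
# focal plane, so each multiplexed pixel simultaneously views N object
# points. A successful de-multiplexing would recover the N channels,
# yielding a reconstructed image with N times the sensor's pixel count.

rng = np.random.default_rng(0)

N = 4                      # number of multiplexed channels (hypothetical)
sensor_shape = (480, 640)  # hypothetical sensor format (rows, cols)

# N sub-images, one per field-of-view region (random stand-ins here).
channels = rng.random((N, *sensor_shape))

# The sensor measures the superposition of all channels at each pixel:
# a single frame in the sensor's own format.
multiplexed = channels.sum(axis=0)
assert multiplexed.shape == sensor_shape

# Effective pixel count after de-multiplexing: N sensor-format images.
effective_pixels = N * sensor_shape[0] * sensor_shape[1]
print(effective_pixels)  # 4 * 480 * 640 = 1228800
```

Note that the multiplexed frame occupies no more focal-plane area than a conventional image; the N-fold gain exists only after the measured intensities are disambiguated back into their constituent channels.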
Prior designs of multiplexed imaging devices have their own drawbacks, however. For example, early conceptual designs utilized a multiple-lens imager optical system in which each lens focuses on the same image sensor. This configuration, however, is likely to suffer defocus from tilted image planes and keystone distortion, in addition to offering questionable cost savings over a more traditional array of imaging sensors. Further, systems that utilize full-aperture beam splitters to combine various fields of view require large multiplexing optics and suffer loss from light escaping through imperfect beam splitting. Still further, some prior designs utilize prisms to divide a field of view, but these systems are limited in their ability to image wide fields of view because prisms can steer light through only small angles before optical dispersion degrades the image. In addition, many prior multiplexing designs utilize a form of scanning wherein each narrower field of view is sequentially captured by the imaging sensor, meaning the various fields of view are not simultaneously multiplexed onto the imaging sensor (e.g., similar to the moving-element stereo imaging devices described above).
Multiplexing is also utilized in certain stereo imaging devices, but it is based on spectral multiplexing, a type of optically multiplexed imaging in which two or more images containing different spectral bands of light are multiplexed through a single optical device and the superimposed image is separated using color filters at the focal plane of the camera. Devices utilizing this approach suffer from the disadvantage of excluding portions of the spectral waveband, as well as from a loss of pixel resolution due to the color filter mosaic at the image plane.
Accordingly, there is a need in the art for improved devices and methods for optically multiplexed imaging. In particular, there is a need for improved devices and methods that provide for imaging an extended field of view without the disadvantages associated with assembling a large-format array of imaging sensors, employing a slow-moving scanning mechanism, or multiplexing in a manner that sacrifices resolution or other information capture (e.g., loss of spectral waveband portions, etc.).