In a variety of fields, there is a growing need for capturing and/or projecting three-dimensional (3D) images. Specific applications range from medicine to entertainment. A 3D effect is perceived by presenting different images of a scene to the left and right eyes of a viewer. The typical approach for 3D (stereoscopic) recording uses two or more spatially separated cameras that simultaneously record real-time changes of the same scene from different points of view, the center of each camera's entrance pupil being referred to as a point of view. The 3D effect is dependent upon the different points of view of the multiple cameras. The spatial separation of the multiple cameras may include positional differences (shift) and angular differences (tilt). Positional differences account for parallax. Angular differences determine the direction, relative to the scene, from which incident light reaches each camera; this direction is also referred to as the line of sight.
An example of such an arrangement is illustrated schematically in FIG. 1. Two spatially separated cameras 10, 12 have different respective positions and tilts relative to a scene of interest. The central optical axis 14 of camera 10 and the central optical axis 16 of camera 12 typically intersect at the location of the scene of interest such that the cameras provide two different simultaneous views of the scene from two points of view. As indicated in FIG. 1, the two cameras have a relative operational lateral shift d and a relative operational tilt a.
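As an illustrative aside, the parallax contribution of the lateral shift d can be sketched numerically with a simple pinhole-camera model (a simplified illustration only; the baseline, focal length and depth values below are assumed for the example and are not taken from the figures):

```python
def disparity(baseline_m: float, focal_mm: float, depth_m: float) -> float:
    """Horizontal disparity (parallax) on the sensor, in millimetres,
    for a scene point at the given depth, under a pinhole-camera model."""
    return focal_mm * baseline_m / depth_m

# Assumed values: two cameras separated by a 65 mm baseline (roughly
# human interocular spacing), each with a 35 mm focal length.
baseline = 0.065   # metres
focal = 35.0       # millimetres

for depth in (1.0, 2.0, 5.0, 10.0):   # example scene depths, metres
    print(f"depth {depth:5.1f} m -> disparity {disparity(baseline, focal, depth):.3f} mm")
```

Nearer scene points produce larger disparity between the two views, which is the positional (shift) contribution to the 3D effect described above.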
With a dual camera system such as that illustrated in FIG. 1, each camera must maintain good focus in order for the captured 3D image to be well focused. For scenes having a dynamic character (image object motion in space over time) and/or a natural 3D (volumetric) profile, auto-focus capability and/or variable focus capability become necessary.
In conventional variable focus systems, focusing is achieved through the use of mechanically moving optical elements, typically spherical lenses. Typical optical element displacement is provided by stepper motor or voice coil drive arrangements.
Conventional 3D capture therefore requires mechanically moving optical elements for each camera. FIG. 2 illustrates a prior art dual camera system based on the dual camera system illustrated in FIG. 1 with cameras 10 and 12 directed at a scene of interest. A mechanical focus adjustment mechanism is employed for each camera, mechanism 18 being associated with camera 10 and mechanism 20 being associated with camera 12. Each focus adjustment mechanism must be individually actuated to provide a good focus of the scene which, as indicated in FIG. 2, is at a distance D from cameras 10, 12. It has been found that time-synchronization between multiple mechanical drive arrangements for optical element displacement is very difficult, for example due to mechanical movement, ringing and other inertial effects. These time-synchronization difficulties affect dual camera systems employed in a variety of applications.
A significant drawback of conventional focusing systems is that such mechanical focusing devices tend to be bulky and relatively expensive, making them impractical or unsuitable for many applications; in particular, they do not lend themselves well to miniaturization.
In addition to time synchronization difficulties, further undesirable problems arise in conventional focusing systems if the cameras are not perfectly telecentric, as the mechanical optical element displacement employed to adjust focus also changes image magnification. The following examples show such image magnification change with relative mechanical displacement between a conventional optical lens element and an image sensor. For ease of understanding, the optical lens element is shown stationary and the image sensor shown moving. Such an arrangement is not uncommon in practical implementations. For certainty, the following treatment is dependent only on the relative motion between the optical lens element and the image sensor and applies equally to the more conventional implementation wherein the image sensor is stationary and the optical lens element is displaced with respect to the image sensor.
FIG. 3 illustrates a ray diagram of a conventional non-telecentric imaging system model having variable focus achieved by moving the image sensor (camera) with respect to the optical lens element. Providing focus control via displacement between the image sensor and a non-telecentric optical lens arrangement results in a higher magnification for objects imaged at close focus than for objects imaged at distant focus. As the magnification of an imaged object on the image sensor is given by the ratio between the imaged object height and the tangent of the field angle, the non-zero chief ray angle causes different magnifications for the same imaged object at different image sensor locations. In practice, optical element displacement relative to the image sensor requires precise moving parts. A person of skill in the art would understand that FIG. 3 is idealized; in practice, the optical lens shown is a compound optical element including a multitude of optical components including, but not limited to, spherical or aspherical glass, crystal or plastic lens elements of considerable thickness. Miniaturization of such mechanical displacement focus systems is very difficult due to material limitations of glass lens elements.
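The magnification change described above can be sketched with a thin-lens model (a simplified illustration under assumed values; a real compound lens behaves analogously). Refocusing from a distant object to a near one moves the sensor further from the lens, and with a non-zero chief ray angle the same off-axis object then lands higher on the sensor:

```python
import math

def image_distance(f_mm: float, object_mm: float) -> float:
    """Thin-lens image distance v, from 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

def image_height(chief_ray_deg: float, sensor_mm: float) -> float:
    """Height of an off-axis image point when its chief ray travels at
    chief_ray_deg to the optical axis (non-telecentric: angle != 0)."""
    return sensor_mm * math.tan(math.radians(chief_ray_deg))

f = 35.0                                # assumed focal length, mm
v_far  = image_distance(f, 10_000.0)    # focused on an object at 10 m
v_near = image_distance(f, 500.0)       # refocused at 0.5 m: sensor moves back

theta = 10.0   # assumed chief ray angle for the same off-axis object, degrees
h_far  = image_height(theta, v_far)
h_near = image_height(theta, v_near)
print(f"sensor at {v_far:.2f} mm -> image height {h_far:.3f} mm")
print(f"sensor at {v_near:.2f} mm -> image height {h_near:.3f} mm")
# The same field angle maps to a taller image at near focus:
# magnification has changed with focus.
```

The focal length, object distances and chief ray angle here are assumptions chosen only to make the effect visible; the point is that h_near exceeds h_far whenever the chief ray angle is non-zero.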
Moreover, for dual camera 3D imaging, where each camera typically requires a different focus adjustment setting, magnification change with focus is further undesirable because the resulting image size differences (due to magnification) and image field extent differences (more or less of the scene fits in a same size image frame as viewed from the other point of view) affect image registration between the two images in the stereoscopic pair. Lack of registration is disturbing to a viewer (user). Further, with the two imaging channels being focused differently, magnification differences could affect stereo fusing (blending of the images).
FIG. 4 illustrates a ray diagram of a conventional telecentric imaging system model having variable focus achieved by moving the image sensor (camera) with respect to a glass lens. Providing focus control via displacement between the image sensor and a telecentric optical lens arrangement results in constant magnification for all imaged objects because the chief ray angle remains zero for all image distances. However, besides requiring precise moving parts for optical element displacement relative to the image sensor, extra conventional optical lens complexity is required to achieve such a telecentric design. That is, the compound optical element employed has a higher complexity, typically requiring more glass lenses, which considerably increase thickness. Miniaturization of such mechanical displacement focusing systems and complex optical lens elements is very difficult due to material limitations.
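By contrast, the telecentric behaviour can be sketched with the same simplified model (same assumed values as the earlier sketch; in an image-space telecentric system the chief ray angle is zero, so the lateral position of an image point does not depend on where the sensor sits):

```python
import math

def image_height(chief_ray_deg: float, sensor_mm: float) -> float:
    """Lateral image-point offset contributed by the chief ray angle."""
    return sensor_mm * math.tan(math.radians(chief_ray_deg))

# In an image-space telecentric lens the chief rays exit parallel to the
# optical axis (angle = 0), so moving the sensor to refocus does not
# shift the lateral position of image points.
for sensor_mm in (35.12, 37.63):   # two assumed focus (sensor) positions, mm
    drift = image_height(0.0, sensor_mm)
    print(f"sensor at {sensor_mm:.2f} mm -> lateral drift {drift:.3f} mm")
# The drift is 0.000 mm at both positions: magnification is constant
# with focus, consistent with the description of FIG. 4.
```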
There is a need to improve focusing in dual camera systems.