In both still cameras and moving image cameras it is desirable to acquire, in addition to the two-dimensional image information, depth information on the subject being captured. The availability of depth information is useful for various applications. Examples include support for or automation of the focusing process of the objective, and applications in the field of visual effects, which are produced digitally in postproduction with the aid of computers on the basis of real image data. Examples of visual effects are the separation of objects according to their distance from the taking camera and the spatially correct positioning of computer-generated objects in the real scene captured by the camera. For such visual effects, a complete depth map for the respective image is desirable, i.e. depth information should, where possible, be present for each picture element, so that the virtual object can be inserted into the real scene as realistically as possible.
Various processes for acquiring depth information are generally known. One example is stereo triangulation, in which, in addition to a main imaging beam path, a second beam path is provided that is axially offset from the first and has its own objective and image sensor. Since the objective in the second beam path is independent of the objective in the main beam path, differences between the two objectives, in particular with respect to focusing or angle of view, or shadowing effects can occur. This impairs the accuracy of the triangulation calculation.
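The geometric relation underlying stereo triangulation can be sketched as follows. This is a minimal illustration of the standard rectified-stereo relation Z = f·B/d, not part of the apparatus described here; the function name and parameters are chosen for illustration only.

```python
# Illustrative sketch (hypothetical helper, not from the application):
# for a rectified stereo pair, the depth of a point follows from the
# disparity between its positions in the two images.

def stereo_depth(focal_length_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_length_px: focal length expressed in pixels
    baseline_m:      offset between the two optical axes in metres
    disparity_px:    horizontal offset of the matched point in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 1000 px, baseline 0.1 m, disparity 20 px
print(stereo_depth(1000.0, 0.1, 20.0))  # -> 5.0 (metres)
```

The sketch also makes the sensitivity to calibration visible: any error in the assumed focal length or baseline of the second objective enters the computed depth directly, which is why differences between the two objectives impair the triangulation accuracy.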
Further processes for acquiring depth information use structured lighting: the scene to be captured is illuminated with a known pattern and recorded by a camera at a viewing angle different from the lighting angle. Other processes are based on the principle of time-of-flight measurement, in which the scene to be captured is illuminated by modulated light or is scanned by means of a laser beam.
In the paper “Image and Depth from a Conventional Camera with a Coded Aperture” by Levin et al., published in “ACM Transactions on Graphics”, Vol. 26, Issue 3, 2007, whose disclosure is included in the subject matter of the present application, a method for acquiring depth information is described in which a so-called coded aperture is provided in the region of the objective used for image capture. Such a coded aperture comprises a mask in which impermeable masking sections are arranged in a structured manner.
The basic principle of the method described in the aforesaid paper by Levin et al. is based on the effect that a lens or an objective images a point in object space (corresponding to a point-shaped “light source”) as a point in the image plane only when that point is located in the focusing plane of the lens or objective. If the point-shaped “light source” is instead located at a different distance from the lens or objective, the point is imaged in a blurred manner and a circle of confusion is generated in the image plane whose diameter is proportional to the distance of the light source from the focusing plane.
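The dependence of the circle of confusion on the subject distance can be sketched with the thin-lens model. This is an illustrative formula from elementary geometric optics, not taken from the application; the function name and units are assumptions.

```python
# Illustrative thin-lens sketch (hypothetical helper): diameter of the
# circle of confusion on the sensor for a subject away from the
# focusing plane. All distances in millimetres.

def circle_of_confusion_mm(aperture_mm: float, focal_mm: float,
                           focus_dist_mm: float,
                           subject_dist_mm: float) -> float:
    """Blur-circle diameter c = A * |S2 - S1| / S2 * f / (S1 - f),
    where A is the aperture diameter, f the focal length, S1 the
    focused distance and S2 the actual subject distance."""
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))

# A subject in the focusing plane is imaged as a point (diameter 0);
# the further it departs from that plane, the larger the blur circle.
print(circle_of_confusion_mm(25.0, 50.0, 2000.0, 2000.0))  # -> 0.0
```

In this model the diameter grows monotonically with the departure from the focusing plane, which is the relationship the method of Levin et al. exploits to infer depth from blur.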
By a deconvolution of the captured images, the blur caused by the depth-dependent defocusing described above can be computed out of the image, while at the same time information on the degree of blur, and thus also on the distance of the captured objects, can be acquired. This is because an optical imaging can basically be described as a convolution of a function describing the object to be imaged with a kernel, the kernel describing the optical properties of the imaging element.
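The convolution model and the idea of recovering depth from the degree of blur can be sketched in one dimension. This is a deliberately simplified, pure-Python illustration of the principle (blur as convolution; depth estimation by testing which kernel size best explains the observation), not the actual algorithm of Levin et al.; all names are hypothetical.

```python
# Minimal 1-D sketch of "imaging = convolution with a depth-dependent
# kernel": blur a scene with a kernel of unknown width, then recover
# that width by testing candidate kernels against the observation.

def convolve(signal, kernel):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def box_kernel(width):
    """Uniform blur kernel (a crude stand-in for a defocus disc)."""
    return [1.0 / width] * width

def best_kernel_width(observed, scene, candidates):
    """Pick the candidate width whose re-blurred scene fits best."""
    def err(width):
        model = convolve(scene, box_kernel(width))
        return sum((a - b) ** 2 for a, b in zip(model, observed))
    return min(candidates, key=err)

scene = [0, 0, 1, 0, 0, 2, 0, 0]           # two point-like "light sources"
observed = convolve(scene, box_kernel(3))  # blur from defocus, width 3
print(best_kernel_width(observed, scene, [1, 2, 3, 4, 5]))  # -> 3
```

Since the kernel width stands in for the diameter of the circle of confusion, recovering it amounts to recovering the distance of the object from the focusing plane.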
In the method according to Levin et al., the kernel is characterized inter alia by the structure of the coded aperture. Ultimately, the use of a coded aperture having a relatively complex structure, in comparison with the circular or iris-type aperture usual in conventional objectives, increases the accuracy of the deconvolution calculation or makes that calculation possible at all.
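Why a structured aperture aids the deconvolution can be illustrated in the frequency domain: the blur kernel of a plain open aperture has zeros in its frequency response, at which image information is irrecoverably lost, whereas a suitably chosen coded pattern keeps all frequencies away from zero. The 1-D patterns below are toy examples for illustration only, not the actual mask of the Levin et al. paper.

```python
import cmath

def dft_mag(seq, n=16):
    """Magnitudes of the n-point discrete Fourier transform of seq."""
    padded = list(seq) + [0.0] * (n - len(seq))
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(padded)))
            for k in range(n)]

flat = [0.25, 0.25, 0.25, 0.25]   # uniform blur of an open aperture
coded = [1/3, 1/3, 0.0, 1/3]      # toy "coded" pattern with one gap

print(min(dft_mag(flat)))   # essentially zero: some frequencies lost
print(min(dft_mag(coded)))  # bounded away from zero: invertible blur
```

A kernel with no frequency-domain zeros can be inverted stably, which is one reason the complex aperture structure improves the accuracy of the deconvolution or makes it possible at all.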
The demands on the structure of a coded aperture are described in more detail in the paper by Levin et al. They are satisfied by a structure such as that of the coded aperture or mask shown by way of example in FIG. 2. In this mask, transparent sections alternate with masking sections that are impermeable to radiation and are shown hatched in FIG. 2.
When capturing images using coded apertures, however, the influence on the circle of confusion, which as such is desired for the acquisition of the depth information, can result in esthetically unwanted effects for objects not located in the focusing plane. Light sources imaged in a blurred manner, in particular highlights, then exhibit unusual structures which are not circular but are instead modulated by the structure of the mask. These effects can admittedly be removed afterwards by calculation. However, they are disturbing at least during shooting when an electronic viewfinder (direct reproduction of the captured image) is used.
Furthermore, the aperture area or free opening of the objective is reduced by the masking sections, and the speed of the objective is thus decreased.