1. Field of the Invention
The present invention relates to distance measurements in general and to a distance measurement as may be employed in 3D cameras in particular.
2. Description of Prior Art
Image-capturing sensors offer a way of detecting depth in a non-tactile manner. This may, for example, take place using extremely short shutter times in near-infrared (NIR) light pulse runtime (time-of-flight) measurements. Today, for example, CMOS cameras with active NIR illumination are available which detect three-dimensional object information by means of pulsed electromagnetic radiation. Three-dimensional distance images of the captured object are generated by capturing an image with a short exposure. An example of such a 3D sensor is described in DE 19757595 A1.
FIG. 9 illustrates the measuring principle of these 3D sensors. An optoelectronic CMOS sensor 902 is used, the pixels of which may be read out at random and the integration time of which is adjustable pixel by pixel. Optics 904 images the object 906 to be measured onto the sensor 902.
The object 906 is illuminated by a pulsed light source 908 with one or several very short light pulses 910, whereupon light pulses of equal length are scattered back from the object 906. These light pulses scattered back are guided via the optics 904 to the CMOS sensor 902.
Due to the different distances of different object points from the sensor 902, the light pulses scattered back from these object points arrive at the sensor at different times. In order to measure a distance, a time-measuring or exposure window corresponding to a predetermined integration time is opened at the pixels of the sensor 902. Emission times and integration times are controlled and synchronized by control means 912. The first back-scattered light pulse incident on the sensor 902 is captured more or less completely when the integration time matches the emission time. Light pulses arriving with a time offset, owing to the greater distance of their object points from the sensor 902, are not captured completely but are cut off at the back. In this way, the different runtimes, and thus the distances of the respective pixels from their object points, may be determined from the different charges collected in the individual pixels of the sensor 902 during the integration time. A three-dimensional distance image can be calculated therefrom by the control or evaluating means 912.
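The windowed accumulation described above can be sketched as follows. This is an illustrative model only, not the patent's implementation: a rectangular echo pulse delayed by the round-trip time overlaps an integration window of the same length that is opened in sync with the emission, so the tail of a delayed echo is cut off and a nearer object point leaves more charge in its pixel. All names and numeric values (pulse length, distances, responsivity) are assumptions.

```python
# Illustrative sketch of the pulse-runtime principle: the integration
# window is opened simultaneously with the emitted pulse and has the same
# duration, so the back of a delayed echo falls outside the window.

C = 3.0e8        # propagation speed in air, m/s (approx. speed of light)
T_PULSE = 30e-9  # pulse length == integration window length, s (assumed)

def collected_charge(distance_m: float, responsivity: float = 1.0) -> float:
    """Charge (arbitrary units) accumulated from one echo pulse."""
    t_d = 2.0 * distance_m / C                 # round-trip delay
    overlap_s = max(0.0, T_PULSE - t_d)        # window cuts off the pulse tail
    return responsivity * overlap_s / T_PULSE  # linear in the overlap

q1 = collected_charge(1.5)  # nearer object point, e.g. pixel 1
q2 = collected_charge(3.0)  # farther object point, e.g. pixel 2
assert q1 > q2              # the nearer pixel accumulates more charge
```

With these numbers the delays are 10 ns and 20 ns, so pixel 1 collects two thirds and pixel 2 one third of the full pulse charge, mirroring the different charge quantities from which the runtimes are determined.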
The measuring system illustrated in FIG. 9 consequently combines extremely short shutter times with light pulse runtime methods. Using a CMOS camera according to FIG. 9, not only can cameras with intelligent pixels be realized which, apart from standard image capture, can also detect the presence of persons from their movements or even follow them by tracking, but such cameras also offer a way of realizing an optical depth measurement on the basis of the NIR runtime measurement, either for certain image areas or for entire images. In this way, 3D-CMOS cameras able to combine 2D and 3D image shots may be realized.
By means of the 3D distance measurement with CMOS image sensors, a user is able to electronically process three-dimensional image scenes in real time. This opens up a number of fields of application. For example, three-dimensional inspection and placement systems depend on as much image information as possible for reliable object recognition and classification. In automotive systems, the 3D distance measurement may take on monitoring tasks, such as, for example, interior sensing of motor vehicles including intelligent airbag triggering, theft protection, road recognition and early accident detection. The 3D distance measurement may, however, also simply be used for topography measurements, as shown in FIG. 9, or for recognizing persons or for presence sensing. In intelligent airbag triggering in particular, the camera system, for example, has to solve the task of triggering the airbag with an intensity graduated according to the distance of the passenger. With 3D-CMOS image sensors, this is possible without problems. Industry thus has a high demand for such intelligent systems, which in turn means considerable market potential for 3D cameras.
Existing 3D-CMOS image sensors for measuring distances or depths, the measuring principle of which has been described referring to FIG. 9, are largely based on the functional principle of the active pixel sensor (APS). Here, as has been described above, the temporal opening of the exposure or integration window of the pixel is synchronized with the pulsed radiation of the active scene illumination.
In order to illustrate this in greater detail, FIG. 10 shows the light intensity over time at the light source 908 and at two exemplary pixels of the sensor 902 in three graphs arranged one above the other, the x-axes of which represent the time axes and are aligned with one another, and the y-axes of which represent the intensity of the pulsed reflected light at the position of the pixel in arbitrary units, or the presence thereof. In the top graph, two successively emitted light pulses 920 and 922 are illustrated. Synchronized by the controller 912, an integration or exposure window of the same duration is opened in the pixels of the sensor 902 simultaneously with each emission, during which the photocurrent generated there is accumulated, as indicated by the broken lines 924 and 926 in the two bottom graphs, wherein the center graph shows the light received at a pixel 1 and the bottom graph the light received at another pixel 2. The two reflected light pulses 928 and 930 resulting from the pulses 920 and 922 at the pixel 1 may be recognized in the center graph. As can be seen from the bottom graph, the reflected light pulses 932 and 934 resulting at the other pixel 2 arrive at the sensor 902 only after a runtime delay tD2 greater than the runtime delay tD1 at the first pixel 1. The different overlap of the reflected light pulses at the respective pixel with the exposure windows 924 and 926 results in different accumulated charges at the pixels, which are read out at the end of each exposure window 924 and 926, respectively. In particular, the charge quantities Q1(pulse 1) and Q1(pulse 2) at the pixel 1 are greater than the charge quantities Q2(pulse 1) and Q2(pulse 2) at the pixel 2.
Directly before each exposure window 924 and 926, the corresponding pixel is reset, a process in which the charge of the corresponding pixel is preset to a reference value or in which the capacitor pertaining to the photodiode of the pixel is charged to a predetermined value.
As has been described above referring to FIG. 10, the distance of the corresponding object point imaged onto the respective pixel should be determinable from the charge quantities Q1 and Q2, which correspond to the charge carriers generated by the reflected light pulse, since the charge quantity Q depends basically linearly on the runtime delays tD1 and tD2, and these in turn depend, via 2R/vc, on the distance R, vc representing the light propagation speed in the propagation medium, which in air roughly corresponds to the speed of light c, so that the following applies:

Q ∝ 2R/vc
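Reading Q as the charge quantity cut off at the back of the pulse (equivalently, the full-pulse charge minus the collected charge), the linear relation can be inverted to recover the distance. The symbols T (pulse duration) and Q0 (charge for a fully captured pulse) are assumptions introduced here for illustration and do not appear in the description above:

```latex
% Assumed symbols: T = pulse duration, Q_0 = charge of a fully captured
% pulse. The cut-off charge grows linearly with the round-trip delay:
Q \;=\; Q_0\,\frac{t_D}{T}
\;=\; Q_0\,\frac{2R}{v_c\,T}
\;\propto\; \frac{2R}{v_c}
\qquad\Longrightarrow\qquad
R \;=\; \frac{v_c\,T}{2}\,\frac{Q}{Q_0}
```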
However, different problems result in deviations from this theory. When detecting the desired pulse light, a portion of undesired background light will always be detected as well. Furthermore, the reflectivity of the scene object influences the portion of the light reflected. Depending on the distance of the object, these factors sometimes considerably corrupt the useful signal, namely the charge quantities Q1 and Q2. In order to obtain uncorrupted distance information, correction measures are required. The DE 19757595 A1 mentioned above suggests capturing two shots for normalizing surface reflection, namely one with the short photo-capturing time described above and another one with a photo-capturing time long enough to detect the reflected pulses in their entirety within the exposure window, wherein the difference of the two shots, divided by the shot with long exposure, results in a normalized distance image. In order to suppress background light, it is further suggested to perform another short-time and another long-time measurement in addition to the above measurements, but without illumination, and to subtract these shots from the corresponding illuminated shots even before calculating the normalized distance image.
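The correction suggested in DE 19757595 A1 can be sketched per pixel as follows. The variable names (q_* for illuminated shots, b_* for the corresponding shots without illumination) and the toy numbers are assumptions for illustration; the arithmetic follows the description above: background-subtract each shot, then divide the difference of the two corrected shots by the corrected long-exposure shot.

```python
# Sketch of the normalization from DE 19757595 A1 (names assumed):
# q_short/q_long are illuminated shots, b_short/b_long the same exposures
# captured without illumination (background only).

def normalized_value(q_short, q_long, b_short, b_long):
    """Background-corrected, reflectivity-normalized value. For a
    rectangular pulse this equals t_d / T, i.e. it grows linearly with
    the distance and no longer depends on the object's reflectivity."""
    s = q_short - b_short  # short exposure, background subtracted
    l = q_long - b_long    # long exposure, background subtracted
    return (l - s) / l

# Toy numbers: reflectivity r, pulse length 30 ns, delay 20 ns, so only
# 10 ns of the echo fall into the short window while the long window
# captures the whole 30 ns echo.
r, bg_s, bg_l = 0.2, 5.0, 12.0
q_s = r * 10.0 + bg_s
q_l = r * 30.0 + bg_l
v = normalized_value(q_s, q_l, bg_s, bg_l)  # = 20/30, independent of r
```

Changing the assumed reflectivity r leaves v unchanged, which is precisely the point of dividing by the long-exposure shot.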
In spite of these corrections, for a sufficiently high precision of the distance measurement it is necessary to accumulate entire pulse sequences on the pixel structure in order to achieve a useful signal-to-noise ratio. However, this limits the bandwidth of the system.
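The trade-off can be sketched numerically: averaging n noisy per-pulse charge readings shrinks the noise roughly like 1/sqrt(n), which is why whole pulse sequences are accumulated before read-out, at the cost of a lower measurement rate. The noise model and all numeric values below are assumptions for illustration only.

```python
# Sketch: accumulating a pulse sequence reduces noise ~ 1/sqrt(n),
# trading bandwidth (measurement rate) for signal-to-noise ratio.
import random

def accumulate(true_q: float, sigma: float, n: int) -> float:
    """Mean of n per-pulse charge readings with additive Gaussian noise
    (assumed noise model, arbitrary units)."""
    return sum(true_q + random.gauss(0.0, sigma) for _ in range(n)) / n

random.seed(0)  # reproducible illustration
single = accumulate(1.0, 0.5, 1)          # one pulse: very noisy estimate
sequence = accumulate(1.0, 0.5, 10_000)   # pulse sequence: ~100x less noise
```

For 10,000 pulses the standard error of the mean drops from 0.5 to about 0.005, but the pixel must integrate 10,000 pulse periods before a single distance value is available.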
It is a disadvantage of the measuring system illustrated above that it cannot operate with sufficient reliability in all fields of application. In the intelligent airbag triggering mentioned above and in road recognition in particular, high reliability demands are made on the distance-measuring system. An airbag failing to trigger might have fatal consequences, as might a malfunction in road recognition. The 3D-CMOS distance-measuring systems illustrated above, for example, fulfill the required reliability criteria only at high complexity, since they are susceptible to fog or rain and thus unable to reliably determine the distance to the vehicle in front. The 3D-CMOS distance-measuring systems illustrated above, which typically operate with visible laser light or with laser light at a wavelength of about 900 nm that still acts on the human eye, would require a much higher pulse power for externally monitoring a motor vehicle in order to be able to reliably determine the distance, which is prohibitive for reasons of eye safety. In airbag triggering in particular, where the person to be protected is illuminated, the demands of eye safety are a technological obstacle.
Apart from the CMOS photodiode arrays described above for detecting the reflected pulses, there are of course also other receiver arrays, such as, for example, CCD chips. For detecting faint objects, DE 19927694 C1 suggests receiving the radiation in a layer sequence of metal photocathode, vacuum region, multi-channel plate, vacuum region and a conductive pixel surface layer patterned into pixel regions. A second conductive layer is provided, insulated from the first layer, to fill the gaps of the first layer in the lateral extension. In this way, according to DE 19927694 C1, a semiconductor element below the layer sequence, including the semiconductor structures therein, is protected from the photoelectrons which are ejected from the photocathode by the photoelectric effect and accelerated onto the pixel layer.