There are various applications in which optical radiation is to be detected. One example of such applications is 3D cameras. In this context, CMOS image sensors offer the possibility of detecting depth in a contactless manner by means of near infrared (NIR) light pulse propagation time methods. Here, the residual intensity of the laser light reflected by an object is measured. In principle, there are two possible methods: pulse propagation time methods and methods using modulated light. In combination with very short exposure and/or shutter times or suitable demodulation signals, a measurement provides a residual quantity of the reflected light which is a measure of the distance of the object. 3D cameras are based on this principle.
Using CMOS technology, cameras with intelligent pixels may be realized which, in addition to standard imaging, can for example also determine the presence of persons based on their movement, or even track them using a tracking method. CMOS cameras may even realize combinations of 2D and 3D imaging.
Using the method of 3D distance measurement by means of CMOS image sensors, three-dimensional image scenes can be processed electronically in real time. A multitude of applications result from this. Three-dimensional inspection and placement systems, for example, necessitate the highest possible degree of image information for reliable object recognition and classification, as is provided by the additional depth information of a 3D sensor. Further fields of application are in automotive systems, i.e. in the motor vehicle sector, for monitoring tasks, like, for example, monitoring the interior of a motor vehicle including intelligent airbag triggering, theft protection, lane recognition, accident and/or pre-crash recognition, methods for pedestrian protection and parking aids. Further fields of application include topography measurements, person recognition and presence sensing. In particular in airbag control, the camera system for example has the task of providing reliable distance or spacing data, since the airbag has to be triggered with a smaller force depending on the distance of the passenger. This is possible using 3D-CMOS image sensors which provide depth information for every pixel.
FIG. 8 shows the basic structure of 3D measuring systems. An object 802 to be measured is irradiated with pulsed or modulated light 806 by a light source 804, the reflected light 808 being imaged on a pixel array 812, like, for example, a CMOS chip, by optics 810. Additionally, the system includes control means 814 which is coupled to the light source 804 and the pixel array 812 to control system operation when measuring.
In the case of a pulse method, the control means 814 for example drives the light source 804 such that the object 802 is irradiated by light pulses 806 which are synchronized in time with exposure slots of the pixel array 812. 3D-CMOS image sensors for distance and/or depth measurement here may exemplarily be based on the functional principle of an active pixel sensor in which the temporal opening of the exposure slot of a pixel is synchronized with the pulsed triggering of active scene illumination.
In the case of a modulation method, the control means 814 for example drives the light source 804 such that the object 802 is irradiated with modulated light 806, the received signal 808 being demodulated on the receiver side such that the phase difference between the transmitted signal and the reflected signal provides information on the distance.
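The phase evaluation of such a modulation method can be illustrated with the widely used four-bucket scheme. The following is a minimal sketch assuming sinusoidal modulation and four accumulated samples at 0°, 90°, 180° and 270° phase offsets; the function name is illustrative only and does not describe the specific demodulation of the system of FIG. 8:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def distance_from_phase(a0, a1, a2, a3, f_mod):
    """Estimate distance from four demodulation samples taken at
    0, 90, 180 and 270 degrees phase offset (four-bucket method).
    a0..a3 are the accumulated pixel values; f_mod is the modulation
    frequency in Hz.  A constant background offset cancels in the
    differences a3 - a1 and a0 - a2."""
    phase = math.atan2(a3 - a1, a0 - a2)  # phase difference in radians
    if phase < 0.0:
        phase += 2.0 * math.pi
    # The light covers the round trip 2*d, so d = C * phase / (4 * pi * f_mod).
    return C * phase / (4.0 * math.pi * f_mod)
```

Note that the unambiguous range of such a measurement is limited to C / (2 · f_mod), since the phase wraps around every 2π.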
The mode of functioning of the system of FIG. 8 when using pulsed irradiation with temporal synchronization of the exposure slots will be described exemplarily referring to FIG. 9. In FIG. 9, three graphs are shown, of which the top one shows the time waveform of the emitted light intensity of the light source 804, the time t being plotted in arbitrary units along the horizontal axis and the intensity in arbitrary units being plotted along the y axis. Below that, FIG. 9 shows two graphs representing the time waveform of the received light intensity at two different pixels, namely pixel 1 and pixel 2, of the pixel array 812, wherein again the time t is plotted along the x axis and the intensity is plotted along the y axis. As can be seen, the time range illustrated extends over two emitted light pulses 902 and 904. The exposure slots 906a, 906b and 908a and 908b are located in time synchronization to the light pulses 902 and 904, respectively, as is indicated in FIG. 9 by broken lines. As can also be seen from FIG. 9, the reflected light pulses 910a, 910b and 912a, 912b arrive at pixel 1 and pixel 2, respectively, at different times and/or with different time offsets tD1 and tD2, respectively, which depend on the distance of the respective object point, imaged onto pixel 1 and pixel 2, respectively, to the optics 810 and the pixel array 812. Due to the different time offsets and, in particular, due to the differently sized overlap between the respective exposure slots 906a-908b and the received reflected light pulses 910a-912b, different charge quantities Q1 and Q2 will result in the pixel structures of pixel 1 and pixel 2, respectively. In particular, the time offset at every pixel is tD = 2r/c, r designating the distance of the object point and c the speed of light, so that the charge Q accumulated at the pixel is a direct measure of this distance.
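The overlap relation underlying FIG. 9 can be sketched as follows, assuming an idealized rectangular light pulse, constant received intensity, and a shutter window that opens synchronously with the emitted pulse and closes after the pulse length; the function names are illustrative only:

```python
C = 299_792_458.0  # speed of light (m/s)

def accumulated_charge(r, pulse_len, intensity=1.0):
    """Charge collected in a shutter window that opens together with the
    emitted pulse and closes after pulse_len seconds.  The reflected pulse
    arrives delayed by the round-trip time t_d = 2*r/C, so only the
    overlapping portion of the pulse is integrated."""
    t_d = 2.0 * r / C                      # round-trip delay
    overlap = max(0.0, pulse_len - t_d)    # part of the pulse inside the window
    return intensity * overlap

def distance_from_charge(q, pulse_len, intensity=1.0):
    """Invert the overlap relation: r = C * (pulse_len - q/intensity) / 2."""
    return C * (pulse_len - q / intensity) / 2.0
```

In this idealization, a larger distance gives a smaller overlap and hence a smaller charge, and objects whose round-trip delay exceeds the pulse length contribute no charge at all, which bounds the measurable range.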
Different problems of different origins arise in the procedure according to FIG. 9. A portion of undesired background light will be detected together with the desired reflected useful light signal. Furthermore, the reflectivity of the scene objects influences the portion of the light reflected. These factors sometimes corrupt the useful signal considerably, depending on the distance of the object, and need to be removed by measuring the object scene several times, like, for example, by an additional measurement with an extended exposure time slot and additional imaging with no illumination pulses present. Combining these measuring results will then yield a 3D measuring result free from reflectivity and background light corruption, wherein reference is made, for example, to WO 99/34235 A1, where a system according to FIG. 8 using a CMOS-CCD camera is described.
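How combining several exposures can cancel reflectivity and background light may be sketched as follows. The three-exposure arrangement and the function name are hypothetical simplifications assuming ideal rectangular pulses; they illustrate the principle of ratio formation, not the specific scheme of WO 99/34235 A1:

```python
C = 299_792_458.0  # speed of light (m/s)

def distance_corrected(q_short, q_full, q_dark_short, q_dark_full, pulse_len):
    """Combine three kinds of exposure to cancel background light and
    object reflectivity (hypothetical arrangement):
      q_short     - shutter of length pulse_len, synchronous with the pulse
                    (distance-dependent overlap charge)
      q_full      - shutter long enough to capture the whole reflected pulse
                    (distance-independent reference charge)
      q_dark_*    - the same two shutters with the illumination switched off
                    (background only)
    The ratio of the background-corrected charges equals the overlap
    fraction (pulse_len - t_d) / pulse_len, from which the round-trip
    delay t_d and the distance r = C * t_d / 2 follow; reflectivity and
    illumination intensity cancel in the ratio."""
    ratio = (q_short - q_dark_short) / (q_full - q_dark_full)
    t_d = pulse_len * (1.0 - ratio)
    return C * t_d / 2.0
```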
Another problem is that the propagation time of the pulsed and/or modulated light 806 is very short, exemplarily in the range of nanoseconds, which is also the reason why the charge quantity Q containing the depth information is small, so that the signal-to-noise ratio resulting from evaluating a single pulse is relatively poor. One way of improving the signal-to-noise ratio is to accumulate the charge quantities of successive pulses on the pixel structure of the corresponding pixel or in an external circuit, thereby obtaining an improved signal-to-noise ratio by signal averaging.
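The effect of accumulating successive pulses can be illustrated by a small averaging sketch, assuming independent Gaussian noise on each single-pulse reading (an idealization; the function name is illustrative only):

```python
import random

def averaged_signal(signal, noise_sigma, n_pulses, seed=0):
    """Accumulate n_pulses noisy readings of the same small signal and
    average them.  For independent, zero-mean noise the standard deviation
    of the averaged result shrinks by a factor of sqrt(n_pulses), which is
    the signal-to-noise gain of multi-pulse accumulation."""
    rng = random.Random(seed)
    total = sum(signal + rng.gauss(0.0, noise_sigma) for _ in range(n_pulses))
    return total / n_pulses
```

This sqrt(n) gain only holds as long as each accumulation step does not itself inject additional noise, which is precisely what the reset process discussed next does.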
In the pulse method, for example, this way of improving the signal-to-noise ratio is basically possible; however, the pixel has to be reset after each pulse of the sequence. In FIG. 9, a reset process exemplarily takes place at the beginning of the exposure slots 906a-908b, the reset setting the charge quantity to be read out at the end of the exposure slot to a predetermined value. However, the reset process itself generates a considerable noise contribution which, in turn, corrupts or reduces the signal swing obtained by a multiple pulse sequence.
Since in industry there is great demand for precise 3D measuring systems, as has been described in the introduction, there is also demand for an improved detection of electromagnetic radiation which reduces the negative influence caused by reset processes.
In “CCD-Based Range-Finding Sensor”, IEEE Transactions on Electron Devices, Vol. 44, No. 10, Oct. 1997, pp. 1648-1652, a method for measuring distances by means of a CCD sensor is described. In this system, light pulses are used which, after reflection at the object, impinge on the photogate of a pixel. Below the photogate, photoelectrons are generated which, depending on the driving of two slot gates to one respective side of the photogate, are transferred to a first memory gate or a second memory gate. This alternating accumulation at the memory gates is performed over several periods.
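The alternating accumulation described in this reference can be illustrated schematically. The sample-based model below is a strong simplification of the actual charge transfer below the photogate, assuming each incoming sample is steered entirely into one of two bins:

```python
def alternating_accumulate(samples, period):
    """Steer each sample into bin A during the first half of every
    modulation period and into bin B during the second half, summed over
    all periods (idealized sketch of the alternating memory-gate transfer).
    For a rectangular light pulse delayed by d samples within each period,
    the fraction b / (a + b) then equals 2*d / period."""
    a = b = 0.0
    for i, s in enumerate(samples):
        if (i % period) < period // 2:
            a += s
        else:
            b += s
    return a, b
```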
WO 02/33817 A1 describes a method and a device for detecting and processing signal waves where every pixel has two photodiodes which are alternately provided with a voltage, in a manner phase-offset to each other, so as to demodulate modulated light signals. A similar procedure is described in WO 04/086087 A1.