Electronic camera and range sensor systems that provide a measure of distance from the system to a target object are known in the art. Many such systems approximate the range to the target object based upon luminosity or brightness information obtained from the target object. However, such systems may erroneously yield the same measurement for a distant target object that happens to have a shiny, highly reflective surface as for a target object that is closer to the system but has a dull, less reflective surface.
A more accurate distance measuring system is a so-called time-of-flight (TOF) system. FIG. 1 depicts an exemplary TOF system, as described in U.S. Pat. No. 6,323,942 entitled CMOS-Compatible Three-Dimensional Image Sensor IC (2001), which patent is incorporated herein by reference as further background material. TOF system 100 can be implemented on a single IC 110, without moving parts and with relatively few off-chip components. System 100 includes a two-dimensional array 130 of pixel detectors 140, each of which has dedicated circuitry 150 for processing detection charge output by the associated detector. In a typical application, array 130 might include 100×100 pixel detectors 140, and thus include 100×100 processing circuits 150. IC 110 also includes a microprocessor or microcontroller unit 160, memory 170 (which preferably includes random access memory or RAM and read-only memory or ROM), a high speed distributable clock 180, and various signal conversion, computing and input/output (I/O) circuitry 190. Among other functions, controller unit 160 may perform distance-to-object and object velocity calculations.
Under control of microprocessor 160, a source of optical energy 120 is periodically energized and emits optical energy via lens 125 toward a target object 20. Typically the optical energy is light, for example emitted by a laser diode or LED device 120. Some of the emitted optical energy will be reflected off the surface of target object 20, will pass through an aperture field stop and lens, collectively 135, and will fall upon the two-dimensional array 130 of pixel detectors 140, where an image is formed. In such TOF implementations, each imaging pixel detector 140 in the two-dimensional sensor array 130 can capture the time required for optical energy transmitted by emitter 120 to reach target object 20 and be reflected back for detection. The TOF system can use this TOF information to determine distances Z.
Emitted optical energy traversing to more distant surface regions of target object 20 before being reflected back toward system 100 will define a longer time-of-flight than optical energy falling upon and being reflected from a nearer surface portion of the target object (or a closer target object). For example, the time-of-flight for optical energy to traverse the roundtrip path noted at t1 is given by t1=2·Z1/C, where C is the velocity of light. A TOF sensor system can acquire three-dimensional images of a target object in real time. Such systems advantageously can simultaneously acquire both luminosity data (e.g., signal amplitude) and true TOF distance measurements of a target object or scene.
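The roundtrip relationship t1=2·Z1/C above can be inverted to recover distance from a measured time-of-flight. The sketch below is a minimal numerical illustration of that arithmetic only, not the patent's circuitry; the function name is chosen here for illustration:

```python
# Speed of light in meters per second.
C = 299_792_458.0

def distance_from_tof(t_roundtrip_s: float) -> float:
    """Invert t = 2*Z/C to recover distance Z from a roundtrip TOF."""
    return C * t_roundtrip_s / 2.0

# A 10 ns roundtrip corresponds to roughly 1.5 m of depth.
z = distance_from_tof(10e-9)
```

Note the factor of two: the emitted pulse travels to the target and back, so only half the flight path corresponds to the depth Z.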
As described in U.S. Pat. No. 6,323,942, in one embodiment of system 100 each pixel detector 140 has an associated high speed counter that accumulates clock pulses in a number directly proportional to TOF for a system-emitted pulse to reflect from an object point and be detected by a pixel detector focused upon that point. The TOF data provides a direct digital measure of distance from the particular pixel to a point on the object reflecting the emitted pulse of optical energy. In a second embodiment, in lieu of high speed clock circuits, each pixel detector 140 is provided with a charge accumulator and an electronic shutter. The shutters are opened when a pulse of optical energy is emitted, and closed thereafter such that each pixel detector accumulates charge as a function of return photon energy falling upon the associated pixel detector. The amount of accumulated charge provides a direct measure of round-trip TOF. In either embodiment, TOF data permits reconstruction of the three-dimensional topography of the light-reflecting surface of the object being imaged.
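In the first embodiment described above, each pixel's high speed counter accumulates clock pulses in proportion to roundtrip TOF, so the count itself is a direct digital measure of distance. A rough sketch of that conversion, assuming an idealized counter clocked at frequency f_clk (an assumption for illustration, not a disclosed circuit):

```python
# Speed of light in meters per second.
C = 299_792_458.0

def distance_from_count(count: int, f_clk_hz: float) -> float:
    """Each accumulated clock pulse represents 1/f_clk seconds of
    roundtrip flight time; halve the path to obtain depth Z."""
    t_roundtrip_s = count / f_clk_hz
    return C * t_roundtrip_s / 2.0

# With a 1 GHz counter clock, each count is ~15 cm of depth,
# which suggests why very high clock rates are needed for fine resolution.
```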
Some systems determine TOF by examining the relative phase shift between the transmitted light signals and the signals reflected from the target object. U.S. Pat. No. 6,515,740 (2003) and U.S. Pat. No. 6,580,496 (2003) disclose, respectively, Methods and Systems for CMOS-Compatible Three-Dimensional Imaging Sensing Using Quantum Efficiency Modulation. FIG. 2A depicts an exemplary phase-shift detection system 100′ according to U.S. Pat. Nos. 6,515,740 and 6,580,496. Unless otherwise stated, reference numerals in FIG. 2A may be understood to refer to elements identical to what has been described with respect to the TOF system of FIG. 1.
In FIG. 2A, an exciter 115 drives emitter 120 with a preferably low-power periodic waveform (e.g., perhaps 250 mW to perhaps 10 W peak), producing optical energy emissions of known frequency (perhaps 100 MHz) for a time period known as the shutter time (perhaps 10 ms). Energy from emitter 120 and detected signals within pixel detectors 140 are synchronous to each other such that phase difference, and thus distance Z, can be measured for each pixel detector. Detection of the reflected light signals over multiple locations in pixel array 130 results in measurement signals referred to as depth images.
The optical energy detected by the two-dimensional imaging sensor array 130 will include amplitude or intensity information, denoted as “A”, as well as phase shift information, denoted as φ. As depicted in exemplary waveforms in FIGS. 2B, 2C, 2D, the phase shift information varies with distance Z and can be processed to yield Z data. For each pulse or burst of optical energy transmitted by emitter 120, a three-dimensional image of the visible portion of target object 20 is acquired, from which intensity and Z data is obtained (DATA′). As described in U.S. Pat. Nos. 6,515,740 and 6,580,496, obtaining depth information Z requires acquiring at least two samples of the target object (or scene) 20 with 90° phase shift between the emitted optical energy and the pixel detected signals. While two samples are the minimum, preferably four samples, 90° apart in phase, are acquired to permit reduction of detection errors due to mismatches in pixel detector performance, mismatches in associated electronic implementations, and other errors. On a per-pixel-detector basis, the four measured samples are combined to produce the actual Z depth information data. Further details as to implementation of various embodiments of phase shift systems may be found in U.S. Pat. Nos. 6,515,740 and 6,580,496.
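The four-sample combination described above is commonly realized with the standard "four-bucket" estimator: differencing samples taken 180° apart cancels pixel offset mismatches, and the arctangent of the two differences yields the phase φ, from which Z follows. The sketch below illustrates that generic estimator under an assumed sample model a_k = B + A·cos(φ − k·90°); it is not asserted to be the exact computation of the referenced patents:

```python
import math

# Speed of light in meters per second.
C = 299_792_458.0

def depth_from_four_samples(a0: float, a90: float, a180: float,
                            a270: float, f_mod_hz: float) -> float:
    """Estimate depth Z from four correlation samples 90 degrees apart.

    Differencing opposite samples cancels common-mode offsets, so pixel
    mismatches largely drop out of the phase estimate.
    """
    phi = math.atan2(a90 - a270, a0 - a180)  # phase shift in radians
    if phi < 0.0:
        phi += 2.0 * math.pi                 # map into [0, 2*pi)
    # Roundtrip delay t = phi / (2*pi*f_mod); depth Z = C*t/2.
    return C * phi / (4.0 * math.pi * f_mod_hz)
```

The same differences also give the amplitude A = ½·√((a0−a180)² + (a90−a270)²), which is the luminosity datum acquired alongside Z.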
Many factors, including ambient light, can affect the reliability of data acquired by TOF systems. As a result, in some TOF systems the optical energy may be emitted multiple times using different system settings to increase reliability of the acquired TOF measurements. For example, the initial phase of the emitted optical energy might be varied to cope with various ambient and reflectivity conditions. The amplitude of the emitted energy might be varied to increase system dynamic range. The exposure duration of the emitted optical energy may be varied to increase dynamic range of the system. Further, the frequency of modulation of the emitted optical energy may be varied to improve the unambiguous range of the system measurements.
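The link between modulation frequency and unambiguous range follows directly from the phase-shift principle: phase wraps every 2π, so measured depths repeat every C/(2·f_mod). A minimal sketch of that relationship (illustrative only):

```python
# Speed of light in meters per second.
C = 299_792_458.0

def unambiguous_range_m(f_mod_hz: float) -> float:
    """Depth beyond which phase wraps past 2*pi and aliases: C/(2*f_mod)."""
    return C / (2.0 * f_mod_hz)

# Lowering the modulation frequency extends the unambiguous range:
# roughly 1.5 m at 100 MHz versus roughly 15 m at 10 MHz,
# which is why varying f_mod across acquisitions can resolve aliasing.
```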
In practice, TOF systems may combine multiple measurements to arrive at a final depth image. But if there is relative motion between system 100 and target object 20 while the measurements are being made, the TOF data and final depth image can be degraded by so-called motion blur. By relative motion it is meant that while acquiring TOF measurements, system 100 may move, and/or target object 20 may move, or the scene in question may include motion. For example, if the shutter time is 10 ms, a four-sample acquisition spans about 40 ms, and relative motion occurring on a shorter time scale will produce motion blur. The undesired result is that the motion blur will cause erroneous distance Z data, and will yield a depth image with errors.
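The blur-susceptible window described above is simply the total time over which the samples of one depth image are gathered. A back-of-the-envelope sketch, assuming the samples are taken back to back (a simplifying assumption; real systems may interleave or add readout time):

```python
def acquisition_window_s(n_samples: int, shutter_s: float) -> float:
    """Total span over which one depth image's samples are gathered.

    Relative motion that is significant on this time scale will cause
    the samples to describe different scenes, producing motion blur.
    """
    return n_samples * shutter_s

# Four back-to-back 10 ms exposures span about 40 ms, so scene motion
# faster than that window yields inconsistent samples and erroneous Z.
window = acquisition_window_s(4, 10e-3)
```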
Furthermore, implementing such a TOF sensor system requires that each pixel in the pixel array receive a periodic clock signal from a clock generator. The clock signals must be distributed to each pixel in the array, typically via routing wires or leads on IC 110. In practice, the clock circuitry and interconnect routing can influence other portions of the sensor system, for example, causing current or voltage fluctuations on power, ground, reference, bias, or signal nodes in the system. These undesired fluctuations translate into depth measurement errors or ‘noise’ in the result and are referred to as signal integrity problems.
What is needed in TOF measurements is a method and system to detect and compensate for motion blur, preferably with reduced signal integrity problems.
The present invention provides such a method and system.