Systems that rely upon sensing optical energy to discern information are known in the art and have many applications. Exemplary applications might include an optical-based system to determine range between the system and a target object, or to identify and recognize features of a target object. Many such systems acquire two-dimensional or intensity-based information, and rely upon an intensity image of light reflected from a target object. Such luminosity-based systems can use ambient light falling upon the target object, or may actively generate light that is directed toward the target object.
Unfortunately, it is difficult to accurately determine distance solely from the amplitude and brightness of an intensity image. For example, in a range finding system, a highly reflective target object that is farther away from the system can produce a greater amplitude signal than a nearer target object that is less reflective. The result would be that the more distant, shiny object is erroneously reported as being closer to the system than the closer, duller object. In a range finding system used to control robot machinery in an industrial setting, such errors may be intolerable for reasons of safety to nearby human operators. If such a system is used to identify and recognize different target objects, an object might be misidentified. Simply stated, two-dimensional intensity-based systems are very prone to measurement error.
U.S. Pat. No. 6,323,942 to Bamji et al. (November 2001) entitled “CMOS-Compatible Three-Dimensional Image Sensor IC” describes a three-dimensional range finding system that can determine range distance without reliance upon luminosity-based data, the entire content of which patent is incorporated herein by this reference. As disclosed in the '942 patent, such a system generates a depth map that contains the distance Z from each pixel in a CMOS-compatible sensor array to a corresponding location on a target object.
FIG. 1A is a block diagram of a three-dimensional range finding system 10 as exemplified by the '942 patent. Such systems determine distance Z between the system and locations on target object 20 by determining the amount of time required for a light pulse to be emitted by the system, to reflect off the target object, and to be detected by the system. Such systems commonly are referred to as time-of-flight or TOF systems. System 10 may be fabricated upon a single IC 30, requires no moving parts, and needs relatively few off-chip components, primarily a source of optical energy 40, e.g., a light emitting diode (LED) or laser source, and associated optics 50. If desired, laser source 40 might be bonded onto the common substrate upon which IC 30 is fabricated.
System 10 includes an array 60 of pixel detectors 70, each of which has dedicated circuitry 80 for processing detection charge output by the associated detector. At times herein, the terms “detector”, “photodiode detector” (because of its somewhat equivalent function), “photodetector”, “pixel” and “pixel detector” may be used interchangeably. More rigorously, the term “photodetector” may be reserved for the single-ended or more preferably differential photodetectors, e.g., the semiconductor devices that output detection current in response to incoming detected optical energy. In the spirit of such more rigorous definition, “pixel” or “pixel detector” would refer to the dedicated electronics associated with each single-ended or differential photodetector. In other usages, “pixel” may refer to the combination of a photodetector and its dedicated electronics. Using this terminology, array 60 might include 100×100 photodetectors 70, and 100×100 associated detector processing circuits or pixels 80, although other configurations may be used. IC 30 preferably also includes a microprocessor or microcontroller unit 90, RAM and ROM memory, collectively 100, a high-speed distributable clock 110, and various computing and input/output (I/O) circuitry 120. System 10 includes analog-to-digital conversion functions, and for purposes of the present invention, let it be understood that such functions are subsumed within I/O circuitry 120, as are some video gain functions. System 10 preferably further includes a lens 130 to focus light reflected from target object 20 upon pixels 70 in array 60. Controller unit 90 may carry out distance-to-object and object velocity calculations and can output such calculations as DATA, for use by a companion device, if desired. As seen in FIG. 1A, substantially all of system 10 may be fabricated upon CMOS IC 30, which enables shorter signal paths, and reduced processing and delay times. Also shown in FIG. 
1A is ambient light that is present in the environment in which system 10 and target object 20 are found. As described herein, high levels of ambient light relative to levels of light from energy source 40 can be detrimental to reliable operation of system 10.
In brief, microprocessor 90 can calculate the roundtrip time t for optical energy from source 40 to travel to target object 20 and be reflected back to a pixel 70 within array 60. This time-of-flight (TOF) is given by the following relationship:

Z=C·t/2  eq. (1)

where C is the velocity of light.
Thus, without reliance upon luminosity information, system 10 can calculate that Z1=C·t1/2, Z2=C·t2/2, Z3=C·t3/2, and so on. The correct Z distances are obtained, even if more distant regions of target object 20 happen to be more reflective than nearer regions of the target object.
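The per-pixel computation of eq. (1) can be sketched numerically as follows. This is a minimal illustration only; the function name and the sample roundtrip time are assumptions for the sketch, not taken from the '942 patent.

```python
C = 3.0e8  # velocity of light, m/s

def tof_distance(t_roundtrip_s):
    """Distance Z per eq. (1): Z = C*t/2, with t the measured roundtrip time."""
    return C * t_roundtrip_s / 2.0

# A 20 ns roundtrip implies a target about 3 m away,
# independent of how reflective the target happens to be.
z = tof_distance(20e-9)
```

Because only the roundtrip time enters the calculation, a shiny distant target and a dull nearby target are ranged correctly, unlike in the intensity-based systems described above.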
The ability of system 10 to determine proper TOF distances Z can be impacted when the magnitude of ambient light is large relative to the magnitude of reflected light from source 40. What occurs is that the various pixels 70 respond to incoming optical energy that represents the real signal to be measured (e.g., active energy originating from source 40 and reflected by target object 20), and also respond to ambient light. The depth resolution of each pixel, i.e., the accuracy of the distance measurement, is determined by the system signal-to-noise ratio (S/N). Even if ambient light could be measured and subtracted from the total signal, its noise component (e.g., shot noise) would still degrade system performance. Further, the presence of ambient light can have even more severe consequences by causing the pixel detector to saturate.
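The shot-noise point above can be sketched numerically: even if the ambient mean is measured and subtracted, its Poisson noise remains in the total detected signal. The electron counts below are purely illustrative assumptions.

```python
import math

def shot_limited_snr(active_e, ambient_e):
    """Shot-noise-limited S/N: subtracting the ambient mean still leaves
    sqrt(active + ambient) noise electrons in the integrated total."""
    return active_e / math.sqrt(active_e + ambient_e)

# Same active (reflected) signal, weak vs. strong ambient light:
weak_ambient = shot_limited_snr(10_000, 1_000)        # mild degradation
strong_ambient = shot_limited_snr(10_000, 1_000_000)  # severe degradation
```

The sketch shows S/N, and hence depth resolution, collapsing as ambient light grows, even before the separate saturation problem arises.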
A differential pixel photodetector is a detector that receives two input parameters and responds to their difference. With reference to TOF type systems, the active optical energy emitted by the system contributes to both a differential mode signal and a common mode signal, while ambient light contributes only to the common mode signal. Differential pixel detectors can exhibit higher signal-to-noise ratio than single-ended pixel detectors. However, the presence of strong ambient light, perhaps sunlight, can degrade the performance of differential pixel detectors.
Differential pixel photodetectors will now be described with reference to U.S. Pat. No. 6,580,496 to Bamji et al. (June 2003) entitled “Systems for CMOS-Compatible Three-Dimensional Image Sensing Using Quantum Efficiency Modulation”. The '496 patent describes the use of quantum efficiency modulation techniques and differential detectors suitable for three-dimensional range finding systems. The quantum efficiency of the substrate upon which differential CMOS sensors were fabricated was modulated synchronously with the active optical energy emitted from an energy source. Relative phase (φ) shift between the transmitted light signals and signals reflected from the target object was examined to acquire distance z. Detection of the reflected light signals over multiple locations in the pixel array resulted in measurement signals referred to as depth images.
FIG. 1B depicts a system 100 such as described in the '496 patent, in which an oscillator 115 is controllable by microprocessor 160 to emit high frequency (perhaps 200 MHz) component periodic signals, ideally representable as A·cos(ωt). Emitter 120 transmitted optical energy having low average and peak power in the tens of mW range, which permitted use of inexpensive light sources and simpler, narrower bandwidth (e.g., a few hundred KHz) pixel photodiode detectors (or simply, photodetectors) 140′. System 100, most of which may be implemented upon a CMOS IC 30′, will also include an array 130′ of differential pixel photodetectors 70 and associated dedicated electronics 80. It will be appreciated that optical energy impinging upon array 130′ includes a fraction of the emitted optical energy that is reflected by a target object 20, which reflected energy is modulated, and also includes undesired ambient light, which is not modulated. Unless otherwise noted, elements in FIG. 1B with like reference numerals to elements in FIG. 1A may be understood to refer to similar or identical elements.
In system 100 there will be a phase shift φ due to the time-of-flight (TOF) required for energy transmitted by emitter 120 (S1=cos(ωt)) to traverse distance z to target object 20 and return as energy detected by a photodetector 140′ in array 130′, S2=A·cos(ωt+φ), where A represents brightness of the detected reflected signal and may be measured separately using the same return signal that is received by the pixel detector. FIGS. 1C and 1D depict the relationship between phase shift φ and time-of-flight, again assuming for ease of description a sinusoidal waveform. The period for the waveforms of FIGS. 1C and 1D is T=2π/ω.
The phase shift φ due to time-of-flight is:

φ=2·ω·z/C=2·(2πf)·z/C

where C is the speed of light, 300,000 km/sec. Thus, distance z from the energy emitter (and from the detector array) to the target object is given by:

z=φ·C/(2·ω)=φ·C/{2·(2πf)}
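The phase-to-distance relation above can be sketched numerically. Note that because φ wraps at 2π, the unambiguous range is C/(2f); the function name below is an assumption for the sketch, and the 200 MHz figure is the exemplary modulation frequency mentioned for FIG. 1B.

```python
import math

C = 3.0e8  # speed of light, m/s

def phase_to_distance(phi_rad, f_mod_hz):
    """z = phi*C / (2*(2*pi*f)), per the relation above."""
    return phi_rad * C / (2.0 * (2.0 * math.pi * f_mod_hz))

# With 200 MHz modulation the unambiguous range (phi = 2*pi) is C/(2f) = 0.75 m;
# a measured phase shift of pi/2 therefore corresponds to a quarter of that.
z = phase_to_distance(math.pi / 2.0, 200e6)
```

The wrap-around at φ=2π is why such systems must either bound the operating range or disambiguate it, e.g., by using more than one modulation frequency.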
Various techniques for acquiring and processing three-dimensional imaging have been developed by assignee herein Canesta, Inc. of Sunnyvale, Calif. For example, U.S. Pat. No. 6,906,793 (2005) to Bamji et al. describes “Methods and Devices for Charge Management for Three-Dimensional Sensing”; U.S. Pat. No. 6,522,395 (2003) to Bamji et al. discloses “Noise Reduction Techniques Suitable for Three-Dimensional Information Acquirable with CMOS-Compatible Image Sensor ICs”; and U.S. Pat. No. 6,512,838 (2003) to Rafii et al. discloses “Methods for Enabling Performance and Data Acquired from Three-Dimensional Image Systems”. But it remains a challenge to provide a TOF system with differential pixel photodetectors that are protected from saturation, including saturation from differential mode signals, while enhancing signal/noise ratios.
It is useful at this juncture to review prior art implementations of differential pixel photodetectors. Such review will provide a better understanding of the challenges presented in protecting differential pixel photodetectors against saturation, while trying to enhance signal/noise ratios. In the '496 patent, differential detectors responded to amplitude of incoming optical energy and to phase of such energy relative to energy output by emitter 120. A comparison of FIGS. 1C and 1D indicates the nature of the shift in phase (φ).
Referring now to FIG. 2A, the singular term “pixel” is sometimes used collectively to refer to a pair of differential photodetectors, for example first and second photodiode detectors DA and DB, as well as at least a portion of their dedicated electronics. With this understanding, what is shown in FIG. 2A is a pair 70 of pixel photodetectors, hundreds of which can comprise an array 130′, as suggested by FIG. 1B. Incoming optical energy falling upon a pixel detector 70 generates an extremely small amount of photocurrent (or photocharge), typically on the order of picoamps (10−12 amps). Such detection current signals are too small in magnitude to be measured directly. Pixel detectors can instead function in a direct integration mode, in which optical energy induced photocurrent is integrated upon an integration capacitor, and the final capacitor charge or voltage is read out at the end of an integration interval. A capacitor Cx has finite maximum charge capacity Qmax defined by:

Qmax=Cx·Vswing  eq. (2)

where Cx is the total capacitance and Vswing is the maximum voltage swing across the capacitor. A pixel photodetector is said to be in saturation when the total charge integrated on the capacitor exceeds the maximum charge capacity, in which case no useful information can be read out from that pixel photodetector.
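The charge-capacity relation of eq. (2) and the saturation condition can be sketched as follows. The 10 fF capacitance and 1 V swing are illustrative assumptions, not values from the patent.

```python
def q_max(cx_farads, v_swing_volts):
    """Maximum charge capacity per eq. (2): Qmax = Cx * Vswing."""
    return cx_farads * v_swing_volts

def saturated(q_integrated, cx_farads, v_swing_volts):
    """A pixel is saturated when integrated charge exceeds Qmax."""
    return q_integrated > q_max(cx_farads, v_swing_volts)

# Illustrative values: a 10 fF integration capacitor with a 1 V swing
# can hold at most Qmax = 10 fC of integrated photocharge.
QMAX = q_max(10e-15, 1.0)
```

With picoamp-scale photocurrents, even such a tiny capacitor integrates a measurable voltage over a millisecond-scale interval, which is precisely why direct integration is workable, and also why strong ambient light can fill the capacitor well before the interval ends.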
A differential pixel photodetector (e.g., detectors 70 in FIG. 1B) may be represented as shown generically in FIG. 2A, in which modulation circuitry has been omitted for simplicity. Each pixel photodetector 70 has a differential structure with two, perhaps identical, reset and readout circuit components denoted A and B. Components A and B may be considered as part of the pixel photodetector 70 or as part of the pixel's associated circuitry 80. For ease of depiction, the photodetector pair comprising each differential pixel 70 is shown as photodiodes DA and DB, but other detector structures, for example photogate structures, could be used instead. Capacitors CA and CB are shown in parallel with diodes DA and DB and represent detector parasitic capacitance and/or dedicated fixed value capacitors.
Referring briefly to FIG. 1B, within system 100 microprocessor 160 commands oscillator 115 to cause optical energy source 120 to emit pulses of light that are directed by lens 50 toward target object 20. Some of this optical energy will be reflected back towards system 100 and will be focused by lens 135 onto pixel photodetectors 70 within array 130′. Incoming photon energy falling upon a detector 70 will cause photodetector pair DA and DB to generate a small amount of detection signal current that can be directly integrated by capacitors CA and CB. Before the start of integration, microprocessor 160, which may (but need not) be implemented on IC chip 30′, will cause photodetectors DA and DB and their respective capacitors CA and CB to be reset to a reference voltage Vref. For the components shown in FIG. 2A, reset is caused by raising a reset signal Φreset (see FIG. 2B). During the integration time, photocurrent generated by detectors DA and DB respectively discharges associated capacitors CA, CB, as shown in FIG. 2B, and the voltage seen at nodes SA, SB decreases as a function of the photocurrent generated by the associated photodiode DA, DB. The magnitude of the photodiode-generated photocurrent is a function of the amount of light energy received by the respective pixel 70, and thus the amount of light received by the pixel determines the final voltage on nodes SA and SB.
Readout circuitry, comprising transistors Tfollower and Tread, is provided for each of circuits A and B. At the end of the integration time, which will be a function of the repetition rate of the optical pulses emitted from optical energy source 120, microprocessor 160 causes a readout signal Φread to go high. This enables the voltages on nodes SA and SB to be read out of array 130′, e.g., through a bitline. In the exemplary configuration of FIG. 2A, if the voltage on node SA or SB drops below a certain level, denoted here as saturation voltage Vsat, the readout circuit cannot perform the reading operation properly. Therefore the dynamic range of the known differential pixel configuration shown in FIG. 2A is (Vref−Vsat), as depicted in FIG. 2B. While the waveforms in FIG. 2B depict a diminishing potential at nodes SA, SB as a function of photocurrent, one could instead configure the detector circuitry to charge rather than discharge a reference node potential.
But in addition to generating photocurrent in response to active optical energy (from emitter 120) reflected by target object 20, pixel 70 will also generate photocurrent in response to ambient light, which is likewise integrated by capacitors CA, CB, thus affecting the potential at nodes SA, SB. FIG. 2B depicts two examples, showing the effect of relatively low magnitude ambient light and of relatively high magnitude ambient light. In range finding applications, the difference (Afinal−Bfinal) generally contains the range information, and the common mode is of lesser importance. As shown in FIG. 2B, relatively weak ambient light does not cause the pixel to saturate, and at the end of the integration time the final voltages read out from the pixel are above Vsat. But relatively strong ambient light discharges the associated capacitor potential rapidly, which saturates the pixel. Due to the saturation condition, the pixel does not output any useful result in that the differential voltage, which contained the range information, is now zero. Thus, a very real problem with prior differential pixel detectors is that the dynamic range of the pixel is not sufficient to handle strong ambient light.
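The saturation mechanism of FIG. 2B can be modeled with a few lines of arithmetic. All component values and photocurrents below are illustrative assumptions chosen for the sketch, not values taken from the figures.

```python
def differential_readout(i_a, i_b, i_ambient, cap, v_ref, v_sat, t_int):
    """Discharge nodes SA and SB from Vref by (signal + ambient) photocurrent;
    clamp at Vsat to model readout saturation; return the differential
    voltage (SA - SB) that carries the range information."""
    v_a = max(v_ref - (i_a + i_ambient) * t_int / cap, v_sat)
    v_b = max(v_ref - (i_b + i_ambient) * t_int / cap, v_sat)
    return v_a - v_b

# Illustrative values: 10 fF capacitors, Vref = 2 V, Vsat = 0.5 V,
# 1 ms integration, 1.0 pA and 0.8 pA differential photocurrents.
weak = differential_readout(1.0e-12, 0.8e-12, 0.5e-12, 10e-15, 2.0, 0.5, 1e-3)
strong = differential_readout(1.0e-12, 0.8e-12, 50e-12, 10e-15, 2.0, 0.5, 1e-3)
# Weak ambient light preserves a nonzero differential; strong ambient
# light clamps both nodes at Vsat, so the differential (the range data)
# collapses to zero.
```

Note that the ambient term enters both nodes identically, i.e., it is purely common mode, yet it still destroys the differential measurement once either node hits Vsat.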
Thus, whereas CMOS sensors used in systems to acquire images generally rely upon strong levels of ambient light, CMOS sensors used in time-of-flight systems seek to reduce the effects of ambient light. As seen in FIG. 2B, the magnitude of ambient light can overwhelm detection of reflected optical energy, saturating the detectors. Image acquisition systems and time-of-flight systems that must function in environments exposed to strong ambient light or minimal ambient light may require a sensor dynamic range exceeding about 100 dB. In time-of-flight and similar applications in which ambient light is unnecessary, the detection effects of ambient light can be substantially reduced electronically.
There is a need for a method and topology by which the dynamic range of a differential pixel detector can be enhanced without sacrificing a substantial portion of the desired differential signal. Preferably, saturation of the differential pixel detector should be substantially eliminated, even at high magnitudes of the desired differential signal. Further, the signal/noise ratio of the detection signal path should be enhanced. These goals preferably should be met using additional circuitry that can function with existing detector circuitry and that can be implemented to fit within the perhaps 50 μm×50 μm area of a differential pixel photodetector.
Embodiments of the present invention provide such methods and circuit topologies.