System designers are often faced with predicting system performance, a prediction that should take into account the effect of noise at various steps or stages in the system process. An unduly pessimistic prediction of system performance can result in an over-design that is generally more expensive to provide than a design based on a more accurate prediction. For example, in a depth imaging system it is desired to meet certain design specifications that are provided a priori. If the predicted system performance is too pessimistic, the designer may compensate by generating more illumination power than is reasonably necessary, a solution that adds cost to the design and consumes additional operating power. On the other hand, if the predicted system performance is too optimistic, the end design may fail to meet specification.
System performance may be thought of as a sequence of processing steps that collectively represent a processing pipeline. A challenge then is how best to account for the effect of noise upon a signal/noise (S/N) ratio at various steps in the processing pipeline. Noise from independent sources is typically not correlated and is generally accounted for using a root-mean-square (RMS) approach. Since the RMS is always positive, in a typical noise analysis model the contribution of noise is additive and increases at each stage in the processing pipeline. However not all noise is uncorrelated, as portions of the noise at different stages may originate from the same noise source. Under such circumstances an RMS approach yields inaccurate, often overly pessimistic results.
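By way of illustration only, the RMS combination rule for uncorrelated noise, and how it breaks down when two noise contributions share a common source, can be sketched as follows. The numeric values and the correlation-coefficient formulation are illustrative assumptions, not part of any particular system described herein:

```python
import math

def rms_combine(noise_terms):
    """Combine independent (uncorrelated) noise contributions via root-sum-square."""
    return math.sqrt(sum(n ** 2 for n in noise_terms))

def combine_with_correlation(n1, n2, rho):
    """Combine two noise terms with correlation coefficient rho in [-1, 1].

    rho = 0 reduces to the RMS rule for independent noise; rho = 1 models
    two contributions that originate from the same noise source, where the
    amplitudes add directly and the RMS rule would be inaccurate.
    """
    return math.sqrt(n1 ** 2 + n2 ** 2 + 2.0 * rho * n1 * n2)

# Two 3-unit noise contributions:
uncorrelated = rms_combine([3.0, 3.0])                 # ~4.243 if independent
fully_correlated = combine_with_correlation(3.0, 3.0, 1.0)  # 6.0 if same source
```

As the sketch shows, treating shared-source noise as independent misstates the combined noise, which is the inaccuracy the RMS approach introduces.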
In very simple systems in which there are but a few processing steps, a closed form symbolic representation approach is often used. An equivalent RMS noise model may be used for each stage. However in more complicated systems, including imaging or depth systems, a simulation based model is used. In a simulation based model, the effect of injecting noise, i.e., a random variable, into each stage of the processing pipeline is considered, and the system output is examined multiple times, perhaps thousands of times, to obtain a good statistical data-set of the noise. Understandably it is challenging to attempt to accurately and separately model noise at every stage in the pipeline processing system. In prior art simulation based techniques, separation of noise and signal is not maintained: the signal and multiple noise contributions are added together and propagated through the sequence of operations as a single variable.
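The simulation based approach described above can be sketched as a Monte Carlo loop. The two-stage pipeline, its gains, and its noise magnitudes below are hypothetical values chosen for illustration; note that signal and noise travel together as a single variable, so their separate contributions cannot be recovered at the output:

```python
import random
import statistics

def pipeline(signal, rng):
    """Propagate signal plus injected noise through a two-stage pipeline.

    Signal and noise are combined into one variable at each stage, as in
    prior art simulation based models.
    """
    # Stage 1: gain stage with injected noise (illustrative sigma = 0.5)
    x = signal * 2.0 + rng.gauss(0.0, 0.5)
    # Stage 2: offset stage with injected noise (illustrative sigma = 0.3)
    x = x + 1.0 + rng.gauss(0.0, 0.3)
    return x

# Examine the system output many times to build a statistical data-set
rng = random.Random(0)
samples = [pipeline(10.0, rng) for _ in range(10_000)]
mean = statistics.mean(samples)    # estimated output signal level (~21.0)
noise = statistics.stdev(samples)  # estimated output noise (~sqrt(0.5^2 + 0.3^2))
```

Thousands of trials are needed to estimate the output noise statistically, which is why this approach is costly compared with a closed form expression.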
Embodiments of the present invention work well with systems that include imaging and depth systems. Thus, it is useful at this point to briefly describe such systems, with which embodiments of the present invention may be used.
Conventional color (red, green, blue or RGB) imaging cameras are well known in the art. Such cameras typically use ambient light to image a scene onto a sensor array comprising many RGB pixels, and produce a color value at each pixel in the sensor array. On the other hand, comparatively newer so-called three-dimensional depth or imaging cameras or systems are more sophisticated and use active light to image a scene onto a sensor array comprising many depth (or z) pixels. Such cameras produce a z distance value at each pixel in the sensor array. Some depth systems have been developed using spaced-apart stereographic cameras. However these systems experience ambiguity problems associated with two spaced-apart cameras viewing a target object, and acquiring two images from slightly different vantage points. A superior type of prior art three-dimensional system (or camera) may advantageously be constructed using time-of-flight (TOF) technology. (The terms TOF system and TOF camera may be used interchangeably herein.)
A relatively accurate class of depth or Z distance TOF systems has been pioneered by Canesta, Inc., assignee herein. Various aspects of TOF imaging systems are described in the following patents assigned to Canesta, Inc.: U.S. Pat. No. 7,203,356 “Subject Segmentation and Tracking Using 3D Sensing Technology for Video Compression in Multimedia Applications”, U.S. Pat. No. 6,906,793 “Methods and Devices for Charge Management for Three-Dimensional Sensing”, U.S. Pat. No. 6,580,496 “Systems for CMOS-Compatible Three-Dimensional Image Sensing Using Quantum Efficiency Modulation”, and U.S. Pat. No. 6,515,740 “Methods for CMOS-Compatible Three-Dimensional Image Sensing Using Quantum Efficiency Modulation”.
FIG. 1 depicts an exemplary TOF system, as described in U.S. Pat. No. 6,323,942 entitled “CMOS-Compatible Three-Dimensional Image Sensor IC” (2001), which patent is incorporated herein by reference as further background material. TOF system 10 can be implemented on a single IC 110, without moving parts and with relatively few off-chip components. System 10 includes a two-dimensional array 130 of Z or depth detectors 140, each of which has dedicated circuitry 150 for processing detection charge output by the associated detector. Collectively, a detector 140 and its dedicated circuitry 150 comprise a depth pixel, or simply a pixel. In a typical application, pixel array 130 might include 100×100 pixels 140, and thus include 100×100 processing circuits 150. (Sometimes the terms pixel detector, or pixel sensor, or simply pixel or sensor are used interchangeably.) IC 110 preferably also includes a microprocessor or microcontroller unit 160, memory 170 (which preferably includes random access memory or RAM and read-only memory or ROM), a high speed distributable clock 180, and various computing and input/output (I/O) circuitry 190. Among other functions, controller unit 160 may perform distance to object and object velocity calculations, which may be output as DATA.
Under control of microprocessor 160, a source of active optical energy 120, typically of IR or NIR wavelengths, is periodically energized. Energized source 120 emits optical energy Sout via lens 125 toward an object target 20 a depth distance z away from system 10. Typically the optical energy is light, for example emitted by a laser diode or LED device 120. Some of the emitted optical energy will be reflected off the surface of target object 20 as reflected energy Sin. This reflected energy passes through an aperture field stop and lens, collectively 135, and will fall upon two-dimensional array 130 of pixel detectors 140 where a depth or Z image is formed. In some implementations, each imaging pixel detector 140 captures the time-of-flight (TOF) required for optical energy transmitted by emitter 120 to reach target object 20 and be reflected back for detection by two-dimensional sensor array 130. Using this TOF information, distances Z can be determined as part of the DATA signal that can be output elsewhere, as needed. Typically ambient light, e.g., sunlight, will also enter lens 135 and, unless appropriately handled by TOF system 10, can affect the detection performance of sensor array 130.
Emitted optical energy S1 traversing to more distant surface regions of target object 20, e.g., Z3, before being reflected back toward system 10 will define a longer time-of-flight than radiation falling upon and being reflected from a nearer surface portion of the target object (or a closer target object), e.g., at distance Z1. For example, the time-of-flight for optical energy to traverse the roundtrip path noted at t1 is given by t1=2·Z1/C, where C is the velocity of light. TOF sensor system 10 can acquire three-dimensional images of a target object in real time, simultaneously acquiring both luminosity data (e.g., signal brightness amplitude) and true TOF distance (Z) measurements of a target object or scene. Most Z pixel detectors in Canesta-type TOF systems have the property that each individual pixel acquires vector data in the form of luminosity information and also in the form of Z distance information.
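The roundtrip relationship t1=2·Z1/C above can be sketched directly; the 1.5 m example distance below is an illustrative value only:

```python
C = 3.0e8  # velocity of light, m/s

def roundtrip_time(z_m):
    """Time-of-flight for the emitter-to-target-to-sensor roundtrip: t = 2*Z/C."""
    return 2.0 * z_m / C

def distance_from_tof(t_s):
    """Invert the roundtrip relation to recover distance: Z = C*t/2."""
    return C * t_s / 2.0

t = roundtrip_time(1.5)  # a target 1.5 m away yields a 10 ns roundtrip
```

The inverse form is what a TOF system computes per pixel: measure t, report Z.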
A more modern class of TOF sensor systems are so-called phase-sensing TOF systems. Most current Canesta, Inc. phase-sensing or phase-type TOF systems include Z-pixel detectors that acquire vector data in the form of luminosity and depth or z distance information. Such TOF systems construct a depth image (DATA′) by examining the relative phase shift between the transmitted light signals Sout having a known phase, and acquired signals Sin reflected from the target object. Exemplary such phase-type TOF systems are described in several U.S. patents assigned to Canesta, Inc., assignee herein, including U.S. Pat. No. 6,515,740 “Methods for CMOS-Compatible Three-Dimensional Image Sensing Using Quantum Efficiency Modulation”, U.S. Pat. No. 6,906,793 “Methods and Devices for Charge Management for Three Dimensional Sensing”, U.S. Pat. No. 6,678,039 “Method and System to Enhance Dynamic Range Conversion Useable With CMOS Three-Dimensional Imaging”, U.S. Pat. No. 6,587,186 “CMOS-Compatible Three-Dimensional Image Sensing Using Reduced Peak Energy”, and U.S. Pat. No. 6,580,496 “Systems for CMOS-Compatible Three-Dimensional Image Sensing Using Quantum Efficiency Modulation”. Exemplary detector structures useful for TOF systems are described in U.S. Pat. No. 7,352,454 entitled “Methods and Devices for Improved Charge Management for Three-Dimensional and Sensing”.
FIG. 2A is based upon above-noted U.S. Pat. No. 6,906,793 and depicts an exemplary phase-type TOF system in which the phase shift between emitted and detected signals, respectively Sout and Sin, provides a measure of distance Z to target object 20. Under control of microprocessor 160, active optical energy source 120 is periodically energized by an exciter 115, and emits output modulated optical energy S1=Sout=cos(ωt) having a known phase towards object target 20. Emitter 120 preferably is at least one LED or laser diode emitting a low power (e.g., perhaps 1 W) periodic waveform, producing optical energy emissions of known frequency (perhaps a few dozen MHz) for a time period known as the shutter time (perhaps 10 ms).
Some of the emitted optical energy (denoted Sout) will be reflected (denoted S2=Sin) off the surface of target object 20, and will pass through aperture field stop and lens, collectively 135, and will fall upon two-dimensional array 130 of pixel or photodetectors 140. When reflected optical energy Sin impinges upon photodetectors 140 in pixel array 130, charges within the photodetectors are released, and converted into tiny amounts of detection current. For ease of explanation, incoming optical energy may be modeled as Sin=A·cos(ω·t+θ), where A is a brightness or intensity coefficient, ω is the periodic modulation frequency, and θ is the phase shift. As distance Z changes, phase shift θ changes, and FIGS. 2B and 2C depict a phase shift θ between emitted and detected signals, S1, S2. The phase shift θ data can be processed to yield desired Z depth information. Within array 130, pixel detection current can be integrated to accumulate a meaningful detection signal, used to form a depth image. In this fashion, TOF system 100 can capture and provide Z depth information at each pixel detector 140 in sensor array 130 for each frame of acquired data.
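The phase-shift model Sin=A·cos(ω·t+θ) can be illustrated by correlating an ideally sampled return signal against 0° and 90° references of the emitted waveform. This is a simplified mathematical sketch under assumed ideal sampling, not a description of the actual pixel circuitry 140/150; the sample count and modulation frequency are illustrative:

```python
import math

def phase_shift_iq(theta_true, f=50e6, A=0.8, n=1000):
    """Recover the phase shift of Sin = A*cos(w*t + theta) by correlating
    against cos(w*t) and sin(w*t) references over one modulation period.

    Hypothetical ideal sampling; n samples span exactly one period of f.
    """
    omega = 2.0 * math.pi * f
    ts = [i / (n * f) for i in range(n)]          # one modulation period
    s_in = [A * math.cos(omega * t + theta_true) for t in ts]
    # In-phase (0 deg) and quadrature (90 deg) accumulations, analogous to
    # integrating pixel detection current against two reference phases
    i_acc = sum(s * math.cos(omega * t) for s, t in zip(s_in, ts))
    q_acc = sum(s * math.sin(omega * t) for s, t in zip(s_in, ts))
    return math.atan2(-q_acc, i_acc)
```

Note that the recovered θ is independent of the brightness coefficient A, which is why a phase-type TOF pixel can report distance separately from luminosity.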
As with the embodiment of FIG. 1, unless adequately handled, ambient light can affect detection by array 130, and thus overall performance of TOF system 100. Canesta, Inc. has received several U.S. patents directed to use of common mode resets (CMR), charge dumps (DUMP), and other techniques to cope with mal-effects of ambient light. These patents include U.S. Pat. Nos. 7,321,111 and 7,157,685 entitled “Method and System to Enhance Differential Dynamic Range and Signal/Noise in CMOS Range Finding Systems Using Differential Sensors”, and U.S. Pat. No. 7,176,438 entitled “Method and System to Differentially Enhance Sensor Dynamic Range Using Enhanced Common Mode Reset”.
In typical operation, pixel detection information is captured at a minimum of two discrete phases, preferably 0° and 90°, and is processed to yield Z data. System 100 yields a phase shift θ at distance Z due to time-of-flight given by: θ=2·ω·Z/C=2·(2·π·f)·Z/C  (1)
where C is the speed of light, 300,000 Km/sec. From equation (1) above it follows that distance Z is given by: Z=θ·C/(2·ω)=θ·C/(2·2·π·f)  (2)
Note that when θ=2·π, there will be an aliasing interval range associated with modulation frequency f given by: ZAIR=C/(2·f)  (3)
In practice, changes in Z produce changes in phase shift θ, although eventually the phase shift begins to repeat, e.g., θ and θ+2·π are indistinguishable. Thus, distance Z is known modulo 2·π·C/(2·ω)=C/(2·f), where f is the modulation frequency.
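Equations (2) and (3) can be sketched numerically as follows; the 50 MHz modulation frequency is an illustrative value, not a parameter of any particular system described herein:

```python
import math

C = 3.0e8  # speed of light, m/s

def z_from_phase(theta, f):
    """Equation (2): Z = theta*C/(2*2*pi*f)."""
    return theta * C / (2.0 * 2.0 * math.pi * f)

def aliasing_interval(f):
    """Equation (3): Z_AIR = C/(2*f), the unambiguous range before theta wraps."""
    return C / (2.0 * f)

f = 50e6                        # illustrative 50 MHz modulation frequency
z_air = aliasing_interval(f)    # 3.0 m unambiguous range
z = z_from_phase(math.pi, f)    # theta = pi maps to half the interval, 1.5 m
```

Because Z is known only modulo Z_AIR, a target at 1.5 m and one at 4.5 m produce the same measured phase at this modulation frequency.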
The performance of an overall TOF system 100 can change as a function of many parameters. For example, system performance increases with increasing light source power emitted by source 120. In practice, determining proper characteristics for optics, e.g., 125, 135, and for light source power from source 120 needed to meet the performance level specification of TOF system 100 for an application can be a tedious and error-prone task. But such analysis is required to know whether a design for a three-dimensional system can meet specification within cost budget for a particular application. Later, when the TOF system is designed for mass production, the analysis task must be repeated but with greater accuracy to try to account for all possible optimizations, to realize best system performance at lowest cost.
System analysis tools that can compute the depth data quality of a TOF system such as system 10 in FIG. 1, or system 100 in FIG. 2A, are typically produced by the vendor of the IC chip, e.g., IC 110, upon which much of the system is fabricated. As to the source of emitted optical energy, such tools seek to take into consideration light source electrical characteristics including power, wavelength, and rise/fall times of laser 120, as well as optical characteristics including diffuser and beam shaper characteristics. Further, such tools seek to take into account sensor electrical characteristics including modulation frequency, flicker noise, and shot noise, as well as optical characteristics including the f/stop of the imaging lenses. In all, such analysis tools attempt to produce an estimate of the quality of the depth data (DATA′) generated by TOF system 100, preferably on a per pixel per frame accuracy and uncertainty basis. There is a need to enhance the quality and accuracy of such prior art tools, which typically involve computing closed form expressions for total signal and noise. Simply stated, computing system signal/noise for all but trivial sequences is generally beyond the realm of a closed form expression. Prior art tools can no longer be relied upon to accurately predict TOF system performance in terms of computing the depth data quality without substantial error. As noted, inaccurate predictions that are too pessimistic can result in system over-design, and design predictions that are unduly optimistic can result in system under-design.
In short, many systems, including imaging systems and depth systems, especially TOF systems and sensor arrays, have evolved beyond what the relatively simple prior art analytical tools can handle. Modern TOF systems have complex operating sequences and operations, whose complexity precludes accurately computing noise per sequence in a closed-form analytical expression.
What is needed then, for use with systems, especially complex systems that may include imaging systems, depth systems, and especially TOF systems, is an analytical tool that can receive as input data parameters including emitted optical energy and its angular distribution, desired signal/noise, sensor characteristics, TOF imaging optics, possible target object z distances and locations, and magnitude of ambient light. Such tool should provide a method to automate the analysis task to quickly ascertain feasibility of a given design for a desired application. When used with depth systems such as TOF systems, the analytical tool should compute the quality of the depth data output by the TOF system, preferably characterized in terms of the number of pixels in the system's field of view, and depth accuracy and jitter uncertainty at each pixel or collection of pixels. Such tool should ensure adequate calculation accuracy to optimize the TOF system before it goes into mass production. Such tool should perform with TOF systems whose sequence of operations and sensor operations is flexibly programmable. Preferably, TOF system signal and noise should be combined while taking into account sequence nuances as they pertain to TOF system accumulated signal and noise. Such analytical or computational tool should be simple to use yet provide higher accuracy in estimating TOF system performance than can be achieved by prior art methods.
The present invention provides such an analytical tool and method for use with modern TOF systems that include complex operating modes.