The present invention relates to a technique for measuring time intervals with very high resolution and, more particularly, to apparatus and method for measuring time intervals with very high resolution using a time stretcher with a time digitizer.
High resolution time measurements are used, for example, in particle physics experiments to measure particle velocity. When the velocity and the momentum (measured by bending in a magnetic field) of a particle are known, the mass of that particle can be calculated. The velocity in such experiments is calculated from a time of flight measurement over a known flight path, wherein the time resolution should be better than that of the particle detector, which generally resolves no better than 100 picoseconds. The time ranges in such particle physics experiments generally are less than 100 nanoseconds.
Other fields of instrumentation in which high resolution measurements are taken include high speed digital sampling oscilloscopes, e.g., the random interleaved sampling type, which may measure the time interval between a trigger pulse and a sampling clock to a resolution that is better than the equivalent sampling period, which may be only a few picoseconds. Other instrumentation includes a laser ranging instrument (LIDAR) which often requires a resolution of less than 100 picoseconds corresponding to, for example, a distance of 16 mm, over a range of several microseconds.
One known technique for measuring time intervals is to electronically count a clock, achieving resolutions on the order of 1 nanosecond, but the resolution is limited by the clock frequency and the ability to correctly count the clock. For a resolution of 1 nanosecond, a 1 GHz clock is required.
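As a rough sketch (not part of the original disclosure), the relationship between clock frequency and counting resolution can be expressed as follows; the single-shot resolution of a direct counter is one clock period:

```python
def counting_resolution_s(clock_hz: float) -> float:
    """Resolution (in seconds) of a counter clocked at clock_hz:
    one clock period is the smallest measurable increment."""
    return 1.0 / clock_hz

# A 1 GHz clock yields 1 ns resolution, as noted above.
print(counting_resolution_s(1e9))  # 1e-09
```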
Another technique for measuring time intervals employs the digital interpolation of a slower clock, such technique achieving an RMS resolution that is better than 300 picoseconds using a 250 MHz clock. A further technique involves converting the time interval to a voltage or charge and then measuring that voltage or charge with an analog-to-digital converter (ADC). A time to amplitude converter (TAC) followed by an ADC suitably measures time in this manner.
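The TAC-followed-by-ADC approach may be sketched as follows. This is an illustrative model only, and the component values (charging current, capacitance, reference voltage, ADC width) are assumptions, not figures from the disclosure:

```python
def tac_output_volts(interval_s: float,
                     current_a: float = 1e-3,
                     capacitance_f: float = 1e-9) -> float:
    """Time-to-amplitude conversion: a constant current charging a
    capacitor for the interval gives V = I * t / C, proportional
    to the interval being measured."""
    return current_a * interval_s / capacitance_f

def digitize(volts: float, vref: float = 1.0, bits: int = 12) -> int:
    """Ideal ADC model: quantize the TAC output over [0, vref)."""
    code = int(volts / vref * (1 << bits))
    return min(code, (1 << bits) - 1)

# A 100 ns interval produces 0.1 V at the TAC output,
# which a 12-bit ADC converts to code 409.
print(digitize(tac_output_volts(100e-9)))  # 409
```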
Still another technique for measuring time intervals with high resolution is to use a time-to-amplitude converter followed by a Wilkinson type ADC, which converts the amplitude into a time and measures the time by directly counting a clock with a scaler. By utilizing a time stretcher, the time interval to be measured is converted into a proportionally longer time interval and the longer time interval is then measured, such measurement being reduced in scale by the amount of "stretch" of the time stretcher so as to produce the time interval between two events.
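The stretch-then-count scheme described above can be sketched numerically. This is a simplified model under assumed parameters (stretch factor and clock frequency are illustrative, not from the disclosure):

```python
def stretched_measurement_s(interval_s: float,
                            stretch: float,
                            clock_hz: float) -> float:
    """Model of a time stretcher followed by a counting scaler:
    stretch the interval, count clock ticks over the stretched
    interval, then divide by the stretch factor to recover the
    original interval."""
    ticks = round(interval_s * stretch * clock_hz)  # counted by the scaler
    return ticks / clock_hz / stretch

# With a stretch of 1000 and a 100 MHz clock, the effective
# resolution is 10 ns / 1000 = 10 ps.
measured = stretched_measurement_s(1.23e-9, stretch=1000, clock_hz=100e6)
print(measured)
```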
In the above-discussed time measuring techniques, the maximum time range that can be measured is a function of the resolution of the ADC and the desired time resolution. For example, given a 12 bit ADC and a resolution of 25 picoseconds, the maximum time interval that may be measured is only 100 nanoseconds.
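The range limitation stated above follows from simple arithmetic: the full-scale range is the number of ADC codes multiplied by the time represented by one code. A minimal sketch:

```python
def max_range_s(adc_bits: int, lsb_resolution_s: float) -> float:
    """Full-scale time range = number of ADC codes x time per code."""
    return (2 ** adc_bits) * lsb_resolution_s

# A 12-bit ADC at 25 ps per code: 4096 * 25 ps = 102.4 ns,
# i.e., only about 100 ns of range.
print(max_range_s(12, 25e-12))
```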
One problem with the above-discussed time measuring techniques is that such techniques and methods are "common start" or trigger methods in which the circuits are armed with a start signal, which enables the stop input. While measuring the time interval between the start signal and stop input after it is armed is sufficient for many laboratory measurements in which the signals to be measured may be arranged as required, such common start measurement is unsuitable to satisfy the time measuring requirements of time of flight measurements in particle physics experiments. In such experiments, there are usually many particles passing through particle detectors and only selected particles are to be measured. Since there are many extraneous signals, a trigger decision is required, which involves providing a trigger signal after the event has occurred. This may be achieved by delaying the signal to be measured by a relatively long amount of time so as to allow for trigger formation and distribution, but such a signal delay requires a relatively long, high bandwidth and expensive delay line in each signal channel.
Another problem with the above-discussed time measuring techniques is that they record only a single hit. Since multiple hits on a channel are not easily accommodated, the system (i.e., the time measuring device) must be reset after each measurement.
A further problem encountered in typical time measuring devices is that they are generally difficult to calibrate. Offset calibration and gain calibration (e.g., the time stretching ratio) must be carried out for each measuring channel, wherein the offset must always be calibrated by the user since the relative signal paths from the signal sources to the measuring instrument must be included in the offset. However, the scale factor (i.e., the gain) is not easily measured and usually requires creating test pulses with an accurately known variable time delay. Since such calibration requires a special test setup, it is usually performed at a service bench. Further, calibration of the integral linearity requires multiple measurements over the entire range of the instrument. Unfortunately, calibrating the time measuring device at a service bench requires that it be quite stable over time and with respect to its environment in order for the calibration to remain valid.