The discipline of high-speed imaging has found many applications in medicine, science, engineering, and other fields. The advent of electronic image sensors such as focal plane arrays (FPAs) based on silicon photodiode sensors has extended the usefulness of this discipline by pushing the available time resolutions far into the sub-microsecond regime.
A typical focal plane array (FPA) using CMOS technology is a hybrid assembly in which a wafer containing a photodiode array is bonded to a CMOS read-out integrated circuit (ROIC). The photodiode array typically contains tens of thousands to hundreds of thousands of individual photodiodes, each of which communicates with a respective pixel on the ROIC. In operation, each photodiode converts the energy of incident electromagnetic radiation or subatomic particles to electron-hole pairs. The electron-hole pairs, in turn, produce electric current that is sampled, integrated over a specified integration time in the underlying pixel of the ROIC, and stored on a hold capacitor as collected charge; the resulting output voltage is proportional to the collected charge.
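The charge-integration relationship described above can be sketched numerically. The function name and the photocurrent, integration-time, and capacitance values below are illustrative assumptions, not parameters of any particular ROIC.

```python
# Sketch of pixel charge integration on a hold capacitor:
# collected charge Q = I * t_int, output voltage V = Q / C.

def pixel_output_voltage(photocurrent_a, t_int_s, c_hold_f):
    """Return the output voltage for a given photocurrent,
    integration time, and hold-capacitor size."""
    charge_c = photocurrent_a * t_int_s   # collected charge, coulombs
    return charge_c / c_hold_f            # output voltage, volts

# Example: 10 pA photocurrent integrated for 1 ms on a 10 fF capacitor.
v_out = pixel_output_voltage(10e-12, 1e-3, 10e-15)
print(v_out)  # → 1.0 (volts)
```

Doubling either the photocurrent or the integration time doubles the output voltage, which is the proportionality the text describes.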
The pixels are arranged in rows and columns. The pixel values are read out from the ROIC in a sequential manner so as to form one or more serial streams of data. In a row-wise approach, for example, each row of pixels is selected in sequence. The selected row is connected to an array of column amplifiers by column buses that are shared by all of the pixels in a given column. A control circuit sweeps the selected row, column-by-column, to assemble the amplified pixel outputs into a serial stream. The serial stream is passed downstream for analog-to-digital conversion (ADC) and further processing.
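The row-wise serialization can be sketched as a pair of nested loops. The function name and the unit column-amplifier gain are illustrative assumptions; a real ROIC performs this sweep in analog circuitry, not software.

```python
# Row-wise readout sketch: select each row in turn, then sweep its
# columns to append the amplified pixel outputs to one serial stream.

def read_out(frame, gain=1.0):
    """frame is a list of rows of pixel values; returns one serial stream."""
    stream = []
    for row in frame:                     # select one row at a time
        for value in row:                 # sweep the shared column buses
            stream.append(gain * value)   # column-amplifier output
    return stream

frame = [[1, 2], [3, 4]]
print(read_out(frame))  # → [1.0, 2.0, 3.0, 4.0]
```

The resulting ordering (all of row 0, then all of row 1, and so on) is what the downstream ADC and processing stages receive.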
FPAs as described above can be used in streak cameras, which produce one-dimensional images in which the second dimension represents time, and in which extremely fine time-resolution can be achieved. FPAs can also be used in framing cameras. A framing camera produces two-dimensional images. Some framing cameras are able to produce a sequence of two or more images corresponding to respective time-resolved frames that are separated by an interframe interval of less than one microsecond.
One shuttering method used by framing cameras is the rolling shutter. In the rolling shutter method, the exposures of two adjacent frames (referred to for convenience as Frame 1 and Frame 2) can overlap. That is, each row begins its Frame-2 exposure as soon as it has completed its Frame-1 readout, without waiting for the rows behind it to complete their own Frame-1 readouts. This method provides very fast read-out, but because different rows sample the scene at different times, valuable information may be lost when imaging an object that is changing rapidly.
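The overlapping exposure schedule described above can be illustrated with a small timing model. The function name and the exposure and readout times are illustrative assumptions chosen to make the overlap visible.

```python
# Rolling-shutter timing sketch: rows start Frame-1 exposure staggered
# by the per-row readout time, and each row begins its Frame-2 exposure
# immediately after its own Frame-1 readout completes.

def row_schedule(n_rows, t_exp, t_read):
    """Return (frame1_start, frame2_start) times for each row."""
    schedule = []
    for r in range(n_rows):
        f1_start = r * t_read                 # rows are read one at a time
        f2_start = f1_start + t_exp + t_read  # no wait for the rows behind
        schedule.append((f1_start, f2_start))
    return schedule

# With 4 rows, a 100 us exposure, and a 10 us per-row readout,
# row 0 begins Frame 2 at t = 110 us, while row 3 is still
# exposing Frame 1 (its exposure runs from 30 us to 130 us).
times = row_schedule(4, 100, 10)
```

Row 0's Frame-2 start falling inside row 3's Frame-1 exposure window is exactly the frame overlap the text describes.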
Another shuttering method is global shuttering. In the global shutter method, all pixels are exposed simultaneously. This method provides highly time-resolved image information. However, the frame rate is limited by the speed with which the pixels can be read out and digitized. The greater the number of pixels, the lower the maximum achievable frame rate. As a consequence, there is a tradeoff between spatial resolution and the frame-to-frame time resolution that can be achieved.
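The frame-rate limit described for global shuttering can be sketched as a simple calculation. The function name and the 10 ns per-pixel read time are illustrative assumptions; the point is only the inverse scaling with pixel count.

```python
# Global-shutter tradeoff sketch: when every pixel must pass through a
# fixed-rate serial readout path, the minimum frame period is the time
# to read all pixels once, so more pixels means a lower frame rate.

def max_frame_rate_hz(n_rows, n_cols, t_pixel_s):
    """Upper bound on frame rate set by serial readout of every pixel."""
    return 1.0 / (n_rows * n_cols * t_pixel_s)

# Quadrupling the pixel count cuts the achievable frame rate by 4x.
low_res = max_frame_rate_hz(512, 512, 10e-9)     # ~381 frames/s
high_res = max_frame_rate_hz(1024, 1024, 10e-9)  # ~95 frames/s
```

This inverse relationship between pixel count and frame rate is the spatial-versus-temporal-resolution tradeoff the text identifies.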
For the study of shock-wave phenomena and other processes in rapidly evolving physical systems, the ability to perform sequential imaging with high spatial resolution at interframe times of tens of nanoseconds, or even less, would be highly advantageous. For this reason, among others, there is a need for improved camera designs that achieve more favorable tradeoffs between spatial and temporal resolution than are currently achievable.