Before the advent of digital image sensors, photography, for most of its history, used film to record light information. At the heart of every photographic film is a large number of light-sensitive grains of silver-halide crystals. During exposure, each micron-sized grain has a binary fate: either it is struck by some incident photons and becomes “exposed”, or it is missed by the photon bombardment and remains “unexposed”. In the subsequent film development process, exposed grains, owing to their altered chemical properties, are converted to silver metal, contributing opaque spots to the film; unexposed grains are washed away in a chemical bath, leaving behind transparent regions. Thus, in essence, photographic film is a binary imaging medium, using the local density of opaque silver grains to encode the original light intensity. Thanks to the small size and large number of these grains, one hardly notices this quantized nature of film when viewing it at a distance, observing only a continuous gray tone.
A binary pixel image sensor is reminiscent of photographic film: each pixel has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure reaches a given threshold q. One way to build such a binary sensor is to modify standard memory chip technology so that each bit is sensitive to visible light.
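As a rough sketch of this acquisition model, the behavior of a single binary pixel can be simulated as follows. The sketch assumes that photon arrivals at a pixel follow a Poisson distribution (the standard model for photon-limited imaging); the function names are illustrative, not taken from the source.

```python
import math
import random

def poisson_sample(lam, rng):
    # Draw one Poisson-distributed photon count with mean `lam`
    # using Knuth's multiplication method (fine for small means).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def binary_pixel(mean_photons, q=1, rng=random):
    # A binary pixel outputs 1 iff at least q photons arrive
    # during the exposure, and 0 otherwise.
    return 1 if poisson_sample(mean_photons, rng) >= q else 0
```

With threshold q = 1, the probability of reading a 1 is 1 − e^(−λ), where λ is the mean photon count per pixel during the exposure.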
With current CMOS technology, the level of integration of such systems can exceed 10⁹–10¹⁰ (i.e., 1 giga to 10 giga) pixels per chip. The corresponding pixel sizes are then far below the fundamental diffraction limit of light (see Section II for more details), and thus the image sensor is oversampling the optical resolution of the light. Intuitively, one can exploit this spatial redundancy to compensate for the information loss due to one-bit quantization, as is classic in oversampled analog-to-digital conversion.
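To make the redundancy argument concrete: assuming threshold q = 1 and Poisson photon arrivals, each pixel reads 1 with probability 1 − e^(−λ), where λ is the mean photon count per pixel. The maximum-likelihood estimate of λ from a block of binary pixels simply inverts this relation. The following is a minimal sketch under those assumptions; the function name is illustrative.

```python
import math

def estimate_exposure(bits):
    # ML estimate of the mean photon count per pixel, lam, from
    # binary readings with threshold q = 1: P(bit = 1) = 1 - exp(-lam),
    # so invert the observed fraction of ones.
    frac_ones = sum(bits) / len(bits)
    if frac_ones >= 1.0:
        return math.inf  # every pixel saturated; lam is unbounded
    return -math.log(1.0 - frac_ones)
```

Averaging over many oversampled binary pixels drives the observed fraction of ones toward its expectation, so the information lost to each single one-bit quantization is recovered in aggregate.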
Building a gigapixel binary sensor that emulates the photographic film process was originally motivated by technical necessity. The miniaturization of camera systems calls for the continuous shrinking of pixel sizes. At a certain point, however, the limited full-well capacity (i.e., the maximum number of photoelectrons a pixel can hold) of small pixels becomes a bottleneck, yielding very low signal-to-noise ratios (SNRs) and poor dynamic range. In contrast, a binary sensor whose pixels need only detect a few photoelectrons around a small threshold q places far weaker demands on full-well capacity, allowing pixel sizes to shrink further. Numerical simulations indicate that the binary sensor can achieve a higher dynamic range than conventional image sensors.