Solid state image sensors produce a digital representation of a visual image by converting incident photons into electronic charges that accumulate in discrete potential wells or photodiodes (pixels) set up by circuitry arranged as an array spanning the image sensor's photosensitive region. The electronic charges collected by each pixel are transferred to read-out electronics, digitized, and stored as integer intensity values proportional to the number of incident photons, thus producing a digital representation of the visual image. Solid state image sensors, such as the charge-coupled device (CCD), the complementary metal oxide semiconductor (CMOS) sensor, and the electron-multiplication CCD (EM-CCD) sensor, are widely used in the fields of spectroscopy, scientific imaging, and astrophotography. In each of these fields, the ability to record an image unaffected by the detector's response to light is ultimately desired. Any contribution of the detector's response to the recorded image must be accounted for and subtracted from the raw image for scientific results to be considered completely correct.
In the design of all solid state image sensors, the electronic circuitry and polycrystalline silicon (polysilicon) gates comprising the device's pixels are formed on one side of a Si wafer (substrate), referred to as the frontside; the opposite side is the backside. Conventional illumination of a solid state image sensor has light incident on the frontside, where the polysilicon gates preclude achievement of high quantum efficiency (QE) by obscuring the photosensitive Si epitaxial layer beneath. Thinning the backside of the substrate down to the order of tens of micrometers (μm) allows unobscured backside illumination of each pixel and therefore achievement of higher QE.
However, backside illumination produces problems not present in traditional front-illuminated sensors. When incident photons enter the back-thinned Si epitaxial layer, they are absorbed at a depth that increases roughly exponentially with wavelength. For example, at 400 nm, a photon is most likely absorbed after propagating 0.2 μm into Si, whereas at 1100 nm a photon will travel 582 μm before likely being absorbed. A back-thinned sensor having too thin a Si epitaxial layer results in poor near infrared (NIR: 750-1100 nm) response, while too thick a layer results in poor ultraviolet-visible (UV-VIS: 300-750 nm) image resolution, via a reduction in the modulation transfer function, and higher dark current. Si epitaxial layer thicknesses for back-illuminated, back-thinned imaging sensors are typically on the order of 10-50 μm.
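The trade-off above follows from Beer-Lambert absorption. A minimal sketch, using the 1/e absorption lengths for Si cited in the text (0.2 μm at 400 nm, 582 μm at 1100 nm) plus an assumed round value of 12 μm at 800 nm, estimates what fraction of incident photons a given epitaxial thickness captures:

```python
import math

# 1/e absorption lengths in Si; the 400 nm and 1100 nm values are cited in
# the text, the 800 nm value (~12 um) is an assumed illustrative figure.
ABSORPTION_LENGTH_UM = {400: 0.2, 800: 12.0, 1100: 582.0}

def absorbed_fraction(thickness_um, absorption_length_um):
    """Beer-Lambert law: fraction absorbed in depth d is 1 - exp(-d / L)."""
    return 1.0 - math.exp(-thickness_um / absorption_length_um)

for wavelength_nm, length_um in sorted(ABSORPTION_LENGTH_UM.items()):
    frac = absorbed_fraction(20.0, length_um)  # a 20 um epitaxial layer
    print(f"{wavelength_nm} nm: {frac:.1%} absorbed in 20 um of Si")
```

Under these assumptions a 20 μm layer absorbs essentially all 400 nm light but only a few percent of 1100 nm light, illustrating why NIR response drives the layer thicker while UV-VIS resolution and dark current drive it thinner.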
A Si epitaxial layer on the order of 10-50 μm thick is relatively transparent to light having a wavelength greater than 600 nm. The backside of the sensor (the surface of incident illumination) and the underlying polysilicon gate structures form two planar optical surfaces between which internally reflected light rays may interfere constructively or destructively, producing interference (fringe) patterns that modulate the sensitivity of the detector. In the art, this effect is referred to as fringing or etaloning. Etaloning is difficult to predict in advance of any scientific measurement and presents a problem that must be overcome when using back-illuminated imaging sensors.
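The two surfaces behave as a low-finesse etalon, so successive fringe maxima are separated in wavelength by the free spectral range, Δλ = λ²/(2·n·d). A brief sketch, using an assumed Si refractive index of about 3.6 in the NIR, gives a feel for the fringe spacing:

```python
def fringe_period_nm(wavelength_nm, n_si, thickness_um):
    """Free spectral range of a plane-parallel etalon at normal incidence:
    delta_lambda = lambda**2 / (2 * n * d)."""
    thickness_nm = thickness_um * 1000.0
    return wavelength_nm**2 / (2.0 * n_si * thickness_nm)

# Example: a 20 um Si epitaxial layer at 800 nm, with n_Si ~ 3.6 (assumed).
period = fringe_period_nm(800.0, 3.6, 20.0)
print(f"Fringe period near 800 nm: {period:.2f} nm")
```

A fringe period of only a few nanometers explains why the resulting sensitivity modulation is dense and difficult to predict or calibrate out.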
In scientific imaging or astrophotography applications, the problem of etaloning may be overcome by acquiring quasi-monochromatic reference images and subtracting the interference pattern in software. For spectroscopic applications, the incident light is spectrally dispersed and therefore necessarily changes in wavelength with lateral (left-to-right) position on the image sensor, which causes the fringe density, and therefore the modulation in detector responsivity, to change as a function of wavelength. The problem is further complicated for spectroscopic applications because nearly all optical spectrometers have a mechanically driven diffraction grating that allows the user to change the region of the electromagnetic spectrum being observed, thus changing entirely the observed modulation in detector sensitivity.
It is known to further improve the QE of a back-illuminated image sensor by reducing the reflectivity of the photosensitive Si backside surface with an anti-reflection (AR) coating deposited atop that surface. Traditionally, a single-layer AR coating, for example hafnium oxide (HfO2), having an optical path difference (OPD) equal to one quarter of the wavelength at which a minimum in backside surface reflectivity is desired, is deposited via physical vapor deposition onto the backside surface of the image sensor. Single-layer AR coatings of this type produce an approximately 'U'-shaped reflectivity curve and therefore can enhance the image sensor's QE only over a limited wavelength range. Away from the reflectivity minimum, the QE of the image sensor is reduced by an amount proportional to the light lost to reflection at the backside surface.
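The quarter-wave condition can be made concrete with the standard thin-film results: physical thickness t = λ/(4·n_c), and normal-incidence reflectance at the design wavelength R = ((n_s − n_c²)/(n_s + n_c²))². The indices below are assumed round values (HfO2 n_c ≈ 1.9 and Si n_s ≈ 4.1 near 550 nm), not figures from the text:

```python
def quarter_wave_thickness_nm(wavelength_nm, n_coating):
    """Physical thickness giving a quarter-wave optical path: t = lambda / (4 n)."""
    return wavelength_nm / (4.0 * n_coating)

def single_layer_reflectance(n_substrate, n_coating):
    """Normal-incidence reflectance of a quarter-wave single-layer coating
    in air at its design wavelength (textbook thin-film result)."""
    r = (n_substrate - n_coating**2) / (n_substrate + n_coating**2)
    return r * r

# Assumed indices: HfO2 n_c ~ 1.9, Si n_s ~ 4.1 near 550 nm.
t = quarter_wave_thickness_nm(550.0, 1.9)
R = single_layer_reflectance(4.1, 1.9)
print(f"HfO2 quarter-wave thickness at 550 nm: {t:.1f} nm")
print(f"Reflectance at design wavelength: {R:.2%}")
```

Under these assumptions the coating is roughly 72 nm thick and drops the surface reflectance to well under one percent at the design wavelength, but only there; away from it the reflectance climbs back toward the 'U'-shaped curve described above.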
It is also known that reducing reflectivity at the backside surface of an image sensor also reduces the amplitude of the interference fringes produced via etaloning. By varying the AR coating's layer thickness, the minimum in reflectivity may be shifted longward in wavelength into the NIR, thereby reducing the interference about the AR coating's reflectivity minimum. However, the QE of the image sensor will then suffer greatly in the UV-VIS portion of the spectrum due to increased backside surface reflectivity.
In two separate publications, U.S. Pat. No. 5,271,614 and Kelt A. et al., Optimised CCD Antireflection Coating Graded Thickness AR (for Fixed-Format Spectroscopy), in Scientific Detectors for Astronomy, pp. 369-374 (2005), an AR coating having a layer thickness that continuously varies laterally across the image sensor is produced. The OPD of the deposited layer is controlled during deposition to be roughly equal to one quarter of the wavelength of light anticipated to strike that lateral position of the image sensor. In this application, the image sensor is used to image a spectrally dispersed object, such as the entrance slit of a spectrometer or spectrophotometer. This technique produces an image sensor having high QE over a wide spectral range and exhibiting minimal etaloning; however, the solution functions only when the light incident on the image sensor is spectrally dispersed in exact accordance with the variation in AR coating thickness. As a result, this type of image sensor is useful only for a single type of spectrometer or spectrophotometer having fixed spectral dispersion characteristics as defined by the image sensor's graded AR coating.
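The graded-coating idea can be sketched as follows. For a fixed-format spectrograph with approximately linear dispersion, the design wavelength, and hence the quarter-wave thickness t(x) = λ(x)/(4·n_c), varies with lateral position x on the sensor. The dispersion figures (300 nm at x = 0, 30 nm/mm) and the coating index n_c ≈ 1.9 below are hypothetical illustrative values, not taken from the cited references:

```python
def graded_thickness_nm(x_mm, lambda_start_nm=300.0,
                        dispersion_nm_per_mm=30.0, n_coating=1.9):
    """Quarter-wave coating thickness at lateral position x, assuming the
    spectrograph disperses light linearly across the sensor (assumed values)."""
    wavelength_nm = lambda_start_nm + dispersion_nm_per_mm * x_mm
    return wavelength_nm / (4.0 * n_coating)

# Thickness profile across a hypothetical 26 mm wide sensor.
for x in (0.0, 13.0, 26.0):
    print(f"x = {x:4.1f} mm: t = {graded_thickness_nm(x):.1f} nm")
```

Under these assumptions the coating grades from roughly 39 nm at one edge to roughly 142 nm at the other, which illustrates why the sensor only works with the one dispersion profile it was graded for.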
Other references in the general field include U.S. Pat. Nos. 6,025,585; 7,196,314 and 7,750,280.
In view of the above, there is a need for a back-illuminated image sensor having uncompromised NIR imaging with high QE over a broad spectral range and, more particularly, for a back-illuminated solid state imaging sensor employing an anti-reflection structure to reduce reflectivity at the photosensitive backside surface that both mitigates interference effects for light in the NIR (etaloning) and produces high device QE over a broad spectral range (UV-NIR).