The present invention, in some embodiments thereof, relates to imaging and, more particularly, but not exclusively, to spectral imaging using an interferometer.
Images are often measured with a digital camera using either charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) technology. These detectors have a 2D array that is sensitive to light. The array is divided into small elements, pixels, and by using fore-optics, the image of the measured sample is focused on the 2D array according to a certain magnification factor. During the measurement, each pixel collects electric charge in a quantity that is proportional to the light that originates from a small part of the sample, and at the end of the measurement the charge is converted to a number. By presenting all these numbers on a monitor, the image can be observed, and the information can be stored and used for further processing.
The spatial resolution of an image can be controlled by the magnification of the fore-optics. Commercially available CCD or CMOS cameras vary in their number of pixels, size, sensitivity to light, speed of operation and other parameters. As an example, Andor (Belfast, UK) has an advanced high-end electron-multiplication CCD camera (EMCCD) that has a mechanism for improving the measured signal with respect to the so-called analog-to-digital noise levels (for example, iXon). The actual sensitive array in this camera is cooled to a temperature of −100° C so that the dark-noise created in the pixels themselves is reduced. The camera has a high quantum efficiency, and allows detecting more than 90% of the light in a large part of the spectrum. As another example, Basler (Ahrensburg, Germany) has a CCD camera with more modest performance, but it is much less expensive than the previously mentioned CCD.
The spatial resolution of an image captured by an imaging system having an imager and a magnifying element is known to depend on the number of pixels of the imager and the magnification of the magnifying element.
For example, a CCD with 1000×1000 pixels, each having dimensions of 10×10 μm², has overall dimensions of 1×1 cm². When such a CCD is used to image a sample with these same dimensions with a magnification fore-optics of ×1, the entire sample can be imaged by the CCD, and the spatial resolution of the captured image is 10 micrometers. When such a CCD is used to image a sample with these same dimensions with a magnification fore-optics of, say, ×10, a resolution of 1 micrometer can be obtained. However, the improved resolution is traded for a reduced field-of-view since with a ×10 magnification, only 1×1 mm² of the sample is captured by this CCD. In order to increase the field-of-view, a scanning technique is typically employed, wherein the sample and/or the imaging system are moved one with respect to the other. In the above numerical example, at least 100 images are required to image a 1×1 cm² sample. Scanning systems known in the art include a microscope-based system marketed by Applied Spectral Imaging under the trade name GenASis Scan & Analysis™. This system has a scanning mechanism for measuring different samples on a microscope slide for optical microscopes.
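The resolution versus field-of-view trade-off in this numerical example can be sketched as follows (the pixel pitch, array size, and magnification values are those given above; the helper names are illustrative):

```python
# Field-of-view vs. resolution trade-off for the 1000x1000-pixel CCD
# example in the text. Constants match the example; function names
# are illustrative, not from any particular library.

PIXEL_PITCH_UM = 10.0   # each pixel is 10 x 10 micrometers
N_PIXELS = 1000         # pixels per side of the square CCD

def sample_resolution_um(magnification: float) -> float:
    """Smallest sample feature mapped onto a single pixel."""
    return PIXEL_PITCH_UM / magnification

def field_of_view_mm(magnification: float) -> float:
    """Side length of the sample area imaged onto the full CCD."""
    return N_PIXELS * PIXEL_PITCH_UM / magnification / 1000.0

# Magnification x1: 10 um resolution over a 10 mm (1 cm) field.
# Magnification x10: 1 um resolution over a 1 mm field, so at least
# 100 tiles are needed to cover the original 1 x 1 cm^2 sample.
tiles = (field_of_view_mm(1.0) / field_of_view_mm(10.0)) ** 2
```
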
It is recognized that the planar spatial resolution d of an optical microscope for a given wavelength λ is diffraction-limited, and is commonly approximated by d=0.61λ/NA, where NA is the numerical aperture of the objective lens.
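As a numerical illustration of this relation (the wavelength and numerical aperture values below are assumed, typical figures, not taken from the text):

```python
def lateral_resolution_um(wavelength_um: float, na: float) -> float:
    """d = 0.61 * lambda / NA, the diffraction-limited resolution
    relation quoted in the text."""
    return 0.61 * wavelength_um / na

# Green light (0.55 um) with a 1.4-NA oil-immersion objective
# resolves features of roughly a quarter micrometer.
d = lateral_resolution_um(0.55, 1.4)
```
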
Light is a radiating electric field that can be characterized by its spectrum. The spectrum describes the intensity I at each wavelength and can be expressed as a function I(λ). In a gray-level image, the intensity at each pixel of the imager is the integrated intensity of the spectrum that impinges on the pixel, which can be approximated as:

I=∫I(λ)dλ  (1)

where the integral is performed over the whole spectral range that the optical system can measure. EQ. 1 is an approximation, because the spectral response of the optical system is typically non-uniform across the spectral range. The resulting gray-scale image includes a set of values (e.g., intensity levels), one value for each pixel of the imager.
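A numerical version of EQ. 1 can be sketched as follows; the Gaussian spectrum below is a made-up stand-in for a measured I(λ), and the Riemann-sum integration is one simple choice of approximation:

```python
import numpy as np

# A pixel's gray level as the integral of the spectrum impinging on
# it (EQ. 1), approximated by a Riemann sum on a 1-nm wavelength grid.
wavelengths = np.linspace(400.0, 700.0, 301)             # nm, 1-nm steps
spectrum = np.exp(-((wavelengths - 550.0) / 40.0) ** 2)  # synthetic I(lambda)

d_lambda = wavelengths[1] - wavelengths[0]               # grid spacing, 1 nm
gray_level = float(spectrum.sum() * d_lambda)            # I = integral I(lambda) d lambda
```

For this Gaussian the exact integral is 40√π ≈ 70.9, which the sum reproduces closely.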
In color CCD or CMOS cameras, the two-dimensional imager is divided into sub-arrays of 4 pixels arranged in a 2×2 square. Each of the 4 pixels is coated with a filter that transmits only part of the spectral range. Typically, the filter of one pixel transmits only the blue spectral range, that of a second pixel only the red, and those of the other two only the green. Commercially available color cameras contain up to about 1000×1000 sub-arrays of 4 pixels.
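The 2×2 filter layout described above (one red, one blue, two green, i.e. a Bayer-style pattern) can be sketched as a mosaic that records which spectral band each pixel measures; the array sizes here are illustrative:

```python
import numpy as np

# One 2x2 sub-array of the color imager described in the text:
# one red, one blue, and two green filtered pixels.
PATTERN = np.array([["R", "G"],
                    ["G", "B"]])

def filter_mosaic(rows: int, cols: int) -> np.ndarray:
    """Tile the 2x2 sub-array over a rows x cols imager
    (rows and cols assumed even)."""
    return np.tile(PATTERN, (rows // 2, cols // 2))

mosaic = filter_mosaic(4, 4)
# Every 2x2 sub-array contains exactly two green, one red,
# and one blue pixel.
```
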
In many cases, RGB color information is insufficient, and a higher spectral resolution is desired. Known in the art is a technique termed “spectral imaging.” In this technique, a spectrum is measured for each pixel of the imager. The resulting dataset is three-dimensional (3D), in which two dimensions are parallel to the imager plane and the third dimension is the wavelength. Such a dataset is known as a “spectral image,” which can be written as I(x,y,λ), where x and y are the position in the imager plane, λ is the wavelength, and I is the intensity at each point and wavelength.
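The 3D structure of a spectral image I(x,y,λ) can be sketched as an array with two spatial axes and one wavelength axis; the cube below is synthetic (random values), where a real one would come from a spectral-imaging measurement:

```python
import numpy as np

# A spectral image I(x, y, lambda) stored as a 3D array:
# axes 0 and 1 are the imager-plane position, axis 2 is wavelength.
n_x, n_y, n_bands = 64, 64, 30
wavelengths = np.linspace(400.0, 700.0, n_bands)        # nm
cube = np.random.default_rng(0).random((n_x, n_y, n_bands))

# The full spectrum measured at one pixel:
spectrum_at_pixel = cube[10, 20, :]   # shape (n_bands,)

# Collapsing the wavelength axis recovers an ordinary gray-level
# image, in the spirit of EQ. 1:
gray_image = cube.sum(axis=2)         # shape (n_x, n_y)
```
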
Several spectral imaging techniques are known [Y. Garini, N. Katzir, D. Cabib, R. A. Buckwald, D. G. Soenksen, and Z. Malik, Spectral Bio-Imaging, in Fluorescence Imaging Spectroscopy and Microscopy, X. F. Wang and B. Herman, Editors. 1996, John Wiley and Sons: New York. p. 87-124; Y. Garini, I. T. Young, and G. McNamara, Spectral imaging: principles and applications. Cytometry, 69A, 735-747 (2006)]. In some systems, a set of filters is mounted on a filter wheel with a mechanism that allows capturing a set of images, each time through a different filter. In other systems, a spectral image is formed by means of a Fourier transform, as disclosed, for example, in U.S. Pat. No. 5,539,517, the contents of which are hereby incorporated by reference.