Holography has found a place in various domains of technology in which waves scattered from an object are used to image that object. Holography is a form of coherent imaging in which the phase of a scattered wave conveys information about the scattering object. The phase is recovered by interference between a wave scattered by the object and a wave that is either derived from the same source used to illuminate the object or bears a known (typically fixed) phase relationship to the illuminating wave.
Examples of coherent imaging are known in acoustics and radar, as well as in optical holography. Interference of a scattered wave with the illuminating wave itself gives rise to fringes in the plane of a detector, while an example of post-detection interference of a detected scattered wave with a non-radiated signal derived from the illuminating source is U.S. Pat. No. 6,825,674 (to Smith). The phase relation between the illuminating source and the signal used as a phase reference may be fixed, or may be modulated, as in U.S. Pat. No. 3,856,986 (to Macovski), for example, and the source, reference, or detector may be spatially swept, as in U.S. Pat. No. 3,889,226 (to Hildebrand) or U.S. Pat. No. 3,640,598 (to Neeley), for example. Processing of synthetic aperture radar (SAR) data with a coherent optical processor (where processing is performed in the visible portion of the electromagnetic spectrum) is termed a ‘quasi-holographic’ system, and may be used to yield reconstructed radar images, as described by Pasmurov et al., Radar Imaging and Holography (The Institution of Engineering and Technology, London, 2009; hereinafter, Pasmurov), which is incorporated herein by reference.
In all of the foregoing applications, the illuminating beam is diffracted from an aperture defined by the source of illuminating radiation and impinges upon the scattering object with a wavefront characterized by a finite radius of curvature. Moreover, in all of the foregoing applications, illumination of the scattering object is either monochromatic or quasi-monochromatic, which is to say that the range of wavelengths impinging on the scattering object is contiguous in frequency and narrow in comparison with the central wavelength, where “narrow” typically connotes a bandwidth-to-wavelength ratio of less than 5%.
In the various fields of optical microscopy, including, for example, optical scanning microscopy or confocal microscopy (with additional modalities discussed below), the phase of light scattered by, or transmitted through, a specimen may carry important information. However, under ordinary circumstances, phase information is lost in the recording process because optical detectors record only the squared amplitude of the field and not the phase. While it is possible to determine the phase interferometrically at each pixel of the image separately, this requires the direct or indirect acquisition of multiple data points at each pixel, as follows from simple information-theoretic considerations, since amplitude and phase components must be distinguished. That is, at the least, the reference field must be varied to obtain data at three delays so as to determine the amplitude of the unknown field, the phase of the unknown field, and the amplitude of the reference. In practice, noise in the measurements makes this minimalistic scheme unstable, and many data must be obtained to perform a stable, nonlinear fit to a sinusoid. This typically requires the acquisition of tens to a few hundred data points distributed over a cycle of the interferogram, multiplying the acquisition time by a large factor and substantially increasing the complexity of the apparatus.
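The minimal three-delay scheme described above can be sketched numerically: with the reference shifted by three known phases, the three recorded intensities determine the unknown amplitude and phase through a small linear system. The field values and delays below are illustrative choices, not taken from any apparatus described herein.

```python
import numpy as np

# Illustrative sketch (assumed values): recovering the phase of an unknown
# field at a single pixel from three intensity measurements taken at three
# known reference delays, as discussed in the text.
A_s, theta = 0.7, 1.2                          # unknown scattered amplitude and phase
A_r = 1.0                                      # known reference amplitude
phis = np.array([0.0, 2*np.pi/3, 4*np.pi/3])   # three reference delays

# The detector records only the squared magnitude of the summed fields
I = np.abs(A_s*np.exp(1j*theta) + A_r*np.exp(1j*phis))**2

# I_n = a + 2b cos(phi_n) + 2c sin(phi_n), where
#   a = A_s^2 + A_r^2,  b = A_s A_r cos(theta),  c = A_s A_r sin(theta)
M = np.stack([np.ones_like(phis), 2*np.cos(phis), 2*np.sin(phis)], axis=1)
a, b, c = np.linalg.solve(M, I)

theta_rec = np.arctan2(c, b)       # recovered phase
A_s_rec = np.hypot(b, c) / A_r     # recovered amplitude
print(theta_rec, A_s_rec)          # ~1.2, ~0.7
```

As the text notes, noise makes such a three-point solution fragile in practice, and many more samples per fringe cycle are fit to a sinusoid instead.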
As an alternative to point-by-point interferometry, holography records the optical amplitude and phase over an entire image by introduction of a reference wave. The interference between the optical field and the reference wave creates a specific interference pattern that encodes the optical amplitude and phase. The fact that the whole field is recorded gives rise to the illusion of a three-dimensional scene, the aspect of holography best known in popular culture.
Methods of optical scanning microscopy, in particular, have had tremendous impact in almost all fields of science and technology, especially in biology. While optical phase measurements provide an added level of insight, they have come at the cost of increased complexity, longer measurement times, and smaller images. In wide-field methods, holographic measurements are fast and relatively easy. The power of holography is that it encodes the amplitude and phase of the optical field at the measurement plane over the entire image and generally does not require the acquisition of any more data than are acquired in the nonholographic version of the same modality. Holography has proven difficult to implement in far-field scanning methods and is seemingly impossible in near-field and super-resolved imaging for reasons now described in detail.
A generic, prior art holography apparatus is shown in FIG. 1A in which a field Us(r), scattered by a complex object 10 (the amplitude and phase of which are represented in FIGS. 1B and 1C, respectively), is imaged onto a detector plane 12 and superimposed with a known reference field, Ur(r), yielding an image-plane hologram, I(r), where r is a vector denoting spatial position. Holographic techniques are described generally by Hariharan, Basics of Holography, §1.6 (Cambridge UP, 2002), hereinafter, “Hariharan (2002),” which is incorporated herein by reference. The intensity at the detector plane is given by

I(r) = |Ur(r)|² + |Us(r)|² + Us*(r)Ur(r) + Ur*(r)Us(r).  (1)
In the case that the reference field is a plane wave that, in the plane of detection, is of the form Ur(r) = Ar exp(i k∥·r), where Ar is the amplitude of the reference field and k∥ is the component of the reference wavevector k parallel to the detector plane, the amplitude and phase of the field may be determined by Fourier transform (FT) of the hologram. The FT of Eq. (1) is given by

Ĩ(q) = |Ar|² δ(q) + C(q) + Ar Ũs*(k∥ − q) + Ar* Ũs(k∥ + q),  (2)

where the tilde indicates the FT with respect to position, δ is the Dirac delta function, and C is the autocorrelation of Ũs. Of the four terms, the first is a constant, proportional to the square of the reference amplitude, and may be subtracted or ignored. The second, autocorrelation, term is the square of the scattered field; it is typically negligible with respect to the other terms and may be made so by increasing the amplitude of the reference field. The so-called direct term, Ar*Ũs(k∥ + q), and conjugate term, ArŨs*(k∥ − q), proportional, respectively, to the scattered field and its conjugate, are shifted by −k∥ and k∥, i.e., they are separated by 2|k∥| in the FT plane. (The conjugate of a complex quantity has an imaginary component of equal magnitude and opposite sign.)
In the case that the scattered field is spatially bandlimited to the Ewald circle of reflection (as discussed in Born & Wolf, Principles of Optics (Cambridge UP, 7th ed., 1999), pp. 699-703), the direct and conjugate images do not overlap and may be obtained separately by simply filtering the Fourier transform with a filter matched to the spatial bandwidth of the field and centered at q = −k∥. More precisely, assuming Us is bandlimited and that D(q) = 1 for q in the support of Ũs (where Ũs is non-zero) and D(q) = 0 otherwise, then, if D(k∥ + q)D(k∥ − q) = 0 and the autocorrelation term is negligible, Ũs(k∥ + q) = Ar D(k∥ + q) Ĩ(q)/|Ar|². Either the direct or the conjugate image, once filtered, may be referred to herein as an “isolated crossterm.” Thus, off-axis holography provides a means to determine the complex field, both phase and amplitude, from a single image so long as the field is band-limited.
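The Fourier-domain separation expressed by Eqs. (1) and (2) can be illustrated with a short one-dimensional simulation: a band-limited field interferes with a tilted plane-wave reference, the direct term is selected with the filter D, recentered, and inverse-transformed. The grid size, bandwidth, and reference wavevector below are illustrative assumptions, not values from the figures.

```python
import numpy as np

# Illustrative 1-D sketch of off-axis holography per Eqs. (1)-(2).
N, B, m = 256, 10, 40            # samples, field bandwidth, reference k (index units)
rng = np.random.default_rng(0)

# Synthesize a band-limited complex field Us: spectrum supported on |q| <= B
spec = np.zeros(N, complex)
idx = np.r_[0:B + 1, N - B:N]
spec[idx] = rng.normal(size=idx.size) + 1j*rng.normal(size=idx.size)
Us = np.fft.ifft(spec)

Ar = 50.0                                  # strong reference suppresses autocorrelation term
x = np.arange(N)
Ur = Ar*np.exp(2j*np.pi*m*x/N)             # tilted plane-wave reference
I = np.abs(Ur + Us)**2                     # detector records intensity only (Eq. 1)

# Filter D matched to the field's bandwidth, centered on the direct term at q = -m
Iq = np.fft.fft(I)
q = np.fft.fftfreq(N, d=1.0/N)
D = (np.abs((q + m + N/2) % N - N/2) <= B).astype(float)

# Recenter the windowed spectrum by +m, invert, and divide out the reference conjugate
Us_rec = np.fft.ifft(np.roll(D*Iq, m)) / Ar
# Us_rec now matches Us to numerical precision: complex field from a single image
```

Because m > 3B here, the direct term, conjugate term, autocorrelation, and delta term occupy disjoint regions of the FT plane, so the recovery is exact rather than approximate.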
In far-field holography (where fields emanating from a point are effectively spherical waves, i.e., where points of constant phase lie on a sphere), the physics of wave propagation guarantees that the scattered field is spatially bandlimited to the Ewald circle of reflection, which is to say that the support of Ũs is contained within a circle of radius 2π/λ. The component of the wavevector of the reference wave in the detector plane may likewise be as large as 2π/λ. Thus, the direct and conjugate terms may always be determined separately by off-axis holography with sufficiently oblique illumination.
FIG. 1D is a calculated hologram derived using the diffraction-limited optical setup of FIG. 1A, while, in the FT of FIG. 1D shown in FIG. 1E, the direct 14 and conjugate 15 terms have separated into the upper and lower halves of the FT plane and do not appreciably overlap because k∥ is of a magnitude larger than the spatial bandwidth (the width of the Fourier distribution of the field). In a diffraction-limited imaging system, the reference wavevector may always be made sufficiently large (to separate the direct and conjugate terms) by oblique incidence of the reference wave, as taught by Leith et al., Reconstructed Wavefronts and Communication Theory, J. Opt. Soc. Am., vol. 52, pp. 1123-30 (1962), incorporated herein by reference. By applying a window, indicated by the dashed box in FIG. 1E, and then shifting the result by k∥ back to the center of the FT plane and taking the inverse FT, the original field amplitude (FIG. 1F) and phase (FIG. 1G) may be obtained, scaled by the reference conjugate amplitude Ar*. A corresponding technique, taught in Pasmurov (2009), at p. 39, may be implemented for synthetic radar imaging.
As has long been known, the physics of waves (the fact that propagating fields are spatially bandlimited) imposes limits on resolution in standard far-field imaging systems, and these resolution limits are indeed manifested in the low-resolution reconstruction of the phase and amplitude in FIGS. 1F and 1G. In order to image at scales significantly smaller than the wavelength, a number of superresolution techniques have been developed. To image at scales significantly below the wavelength of light used, it is necessary to probe the nonpropagating, superoscillatory evanescent fields generated at sources and scatterers having high spatial-frequency components. These evanescent waves decay exponentially away from the source, so it is necessary to interact with these fields close to the sample. This is the basic idea in scanning near-field optical microscopy (SNOM). A particularly successful variant of SNOM is the scattering-type SNOM (s-SNOM), described, for example, by Keilmann et al., Near-Field Microscopy by Elastic Light Scattering from a Tip, Phil. Trans. R. Soc. Lond. A, vol. 362, pp. 787-805 (2004), which is incorporated herein by reference. A sharp local probe, very often an atomic force microscope (AFM) probe, is placed near or at the surface of a sample. The probe-sample system is illuminated from the far field. The local probe produces a strong local field and acts to couple out the local scattered fields, thus probing the high spatial-frequency near field. The probe is scanned over the surface of the sample, and the scattered field intensity is recorded as a function of tip position. The resolution in s-SNOM is effectively independent of wavelength and depends only on the tip sharpness.
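The exponential decay noted above can be quantified: for a spatial frequency k∥ exceeding the free-space wavenumber k0 = 2π/λ, the axial wavenumber kz = (k0² − k∥²)^(1/2) becomes imaginary, giving a 1/e decay length of 1/(k∥² − k0²)^(1/2). The following sketch evaluates this for illustrative values (mid-infrared illumination, 100-nm features), which are assumptions for the example rather than values from the text.

```python
import math

# Illustrative evanescent-decay estimate (assumed wavelength and feature size)
lam = 10e-6                      # 10 um illumination
feature = 100e-9                 # 100 nm spatial period to be resolved
k0 = 2*math.pi/lam               # free-space wavenumber
k_par = 2*math.pi/feature        # required transverse spatial frequency

# Beyond the Ewald circle (k_par > k0), the field decays as exp(-z/decay)
decay = 1.0/math.sqrt(k_par**2 - k0**2)
print(decay)                     # ~1.6e-8 m: the field dies off within ~16 nm
```

The decay length is a small fraction of the feature size itself, which is why the probe must be brought within nanometers of the sample, as in s-SNOM.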
However, a marriage of holographic phase imaging and superresolved imaging has remained seemingly impossible because of the fundamental issue illustrated in FIGS. 1H-1K, which simulate the case in which the diffraction-limited imaging system of FIG. 1A has been replaced by an unspecified superresolved imaging system with a resolution of λ/100, where λ is the wavelength. The reference wave Ur(r) remains the same as that used in FIGS. 1D-1G, such that the interference fringes (in FIG. 1H) are necessarily much larger than the smallest features of the sample (shown in FIG. 1B) that are being imaged. As a result, the direct and conjugate terms overlap, as shown in the FT in FIG. 1I, and the reconstruction of both the amplitude (FIG. 1J) and phase (FIG. 1K) of the field suffers from serious artifacts.
In order to separate the direct and conjugate terms, a much larger reference wavevector would be needed. A superresolved imaging system working at a wavelength of 10 microns with 10-nm resolution would require a k∥ three orders of magnitude greater than the free-space wavenumber. No practicable mechanism exists to physically generate such a reference field. Superoscillatory reference waves with k∥ a few times larger than the free-space wavenumber have been tried (as by Bozhevolnyi et al., Near-field Optical Holography, Phys. Rev. Lett., vol. 77, pp. 3351-54 (1996), incorporated herein by reference). However, such superoscillatory reference waves, only several times larger than the free-space wavenumber of the imaging beam, do not suffice, and at least two coregistered, nondegenerate holograms are needed in order to independently determine phase and amplitude, as shown by Carney et al., A computational lens for the near-field, Phys. Rev. Lett., vol. 92, 163903 (2004), incorporated herein by reference. Likewise, confocal scanning holography, as described by Jacquemin et al., A low-error reconstruction method for confocal holography to determine 3-dimensional properties, Ultramicroscopy, vol. 117, pp. 24-30 (2012), has proven elusive.
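The three-orders-of-magnitude figure quoted above follows directly from the ratio of the required reference wavevector, 2π divided by the resolution, to the free-space wavenumber, 2π/λ. A brief check with the stated values:

```python
# Arithmetic check of the figure quoted in the text: a system imaging at
# lambda = 10 um with 10 nm resolution needs a reference wavevector
# roughly 1000x the free-space wavenumber.
lam, resolution = 10e-6, 10e-9
ratio = lam/resolution           # (2*pi/resolution) / (2*pi/lam) = lam/resolution
print(ratio)                     # 1000.0 -> three orders of magnitude
```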
To enable phase imaging with a single, superresolved hologram, reference fields with wavevectors hundreds, or even thousands, of times the size of the free-space wavenumber are required. Such fields will be referred to herein as “hyperoscillatory waves.” In order to achieve true superresolution holography, it would thus be desirable to provide an effectively hyperoscillatory reference wave.