(1) Technical Field
The present invention relates generally to optical systems and more specifically to methods, systems, and program products for the detection and correction of spherical aberration in microscopy systems.
(2) Related Art
Spherical aberration is a common problem in microscopy systems. It is the result of differences in the focal points of light rays based on their differing distances from the center of a lens, i.e., the optic axis of the lens. In the apparatus 100 shown in FIG. 1, light rays 120 passing through a lens 110 at points further from its center have a focal point 130 closer to lens 110 than the focal point 132 formed by light rays 122 entering lens 110 at points closer to its center. Often, this is caused, entirely or in part, by a difference in the refractive indices of the lens immersion medium and the embedding medium of the sample. While light rays entering the lens at points further from its center are shown as having focal points closer to the lens, such rays could also have focal points further from the lens than do rays entering the lens at points closer to its center. The effects of spherical aberration include reductions in both the resolving power of the lens and the signal-to-noise ratio in collected data. Spherical aberration increases as one focuses further into a sample.
Spherical aberration is especially problematic in lenses with spherical surfaces. Lenses having parabolic surfaces have been shown to reduce or eliminate the effect of spherical aberration, but are not often used, due to their great expense. Other methods for correcting spherical aberration exist, including, for example, the use of a correction collar on the lens, use of a lens immersion medium with a refractive index matching that of the embedding medium, and altering the thickness of the coverslip. The difficulty with such methods, aside from the additional time and expense they require, is that spherical aberration often is not detected until after image data have been collected. Accordingly, the correction of spherical aberration frequently occurs after data collection. Traditionally, such correction has relied on the use of deconvolution algorithms, which enable manipulation of an acquired image to recover a more accurate representation of the imaged object. Such algorithms generally follow the equation:
μn(x,y,z) = χn(x,y,z) * h(x,y,z) + b(x,y,z) + N(x,y,z)  (Eq. 1)
where μn(x,y,z) ≡ acquired image;
χn(x,y,z)≡imaged object;
h(x,y,z)≡point spread function (PSF);
b(x,y,z)≡background level (primarily due to dark current);
N(x,y,z)≡random noise;
*≡convolution operator;
x,y≡in-plane spatial variables; and
z≡axial spatial variable.
Background level can be measured through a calibration protocol described by Holmes et al. (“Light Microscopic Images Reconstructed by Maximum Likelihood Deconvolution” in HANDBOOK OF BIOLOGICAL CONFOCAL MICROSCOPY, 2d ed. (1995)) and random noise can be statistically modeled. Thus, use of a deconvolution algorithm essentially involves the recovery of an imaged object (χn), given an acquired image (μn) and a calculated PSF (h).
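The imaging model of Eq. 1 can be sketched numerically. The following is a minimal illustration of the forward model only (blur, background, and noise applied to a known object), not the method of the invention; the array sizes, kernel shape, and parameter values are invented for the sketch:

```python
import numpy as np

# Forward model of Eq. 1: acquired = (object * PSF) + background + noise.
# All sizes and values below are illustrative assumptions.
rng = np.random.default_rng(0)

obj = np.zeros((32, 32))
obj[16, 16] = 100.0                      # chi_n: a single point emitter

yy, xx = np.mgrid[-16:16, -16:16]
psf = np.zeros((32, 32))
psf[xx**2 + yy**2 <= 9] = 1.0            # h: crude disk-shaped blur kernel
psf /= psf.sum()                         # PSF integrates to 1

# Convolution via the Fourier domain (circular convolution for brevity)
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))

background = 5.0                         # b: dark-current offset
noise = rng.normal(0.0, 0.5, obj.shape)  # N: random noise

acquired = blurred + background + noise  # mu_n per Eq. 1
```

Deconvolution, as described above, is the inverse problem: recovering `obj` from `acquired` given an estimate of `psf`.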
Deconvolution algorithms generally follow one of four schemes. The first, described by Hopkins (The frequency response of a defocused optical system, Proc. R. Soc. A., 91-103 (1955)), constructs a PSF based on diffraction theory, using a generated pupil function. The second, described by Holmes (Maximum-likelihood image restoration adapted for noncoherent optical imaging, J. Opt. Soc. Am., 6:1006-1014 (1989)), assumes a measured PSF. The third, also described by Holmes (Blind deconvolution of quantum-limited incoherent imagery, J. Opt. Soc. Am., 9:1052-1061 (1992)), concurrently estimates the PSF and the imaged object with blind deconvolution. The fourth, described by Markham and Conchello (Parametric blind deconvolution: a robust method for the simultaneous estimation of image and blur, J. Opt. Soc. Am., 16:2377-2391 (1999)), uses parametric blind deconvolution.
FIG. 2 shows the XZ plane of a PSF 200, wherein the acquired image of an imaged object 240 is spread, in part, across two halves 250, 252 of an hourglass pattern. Angle 260 (α) determines the overall shape of the hourglass pattern.
Depending upon the particular degree and direction of the spherical aberration, a greater proportion of the acquired image may be located in one half of the hourglass pattern than the other. For example, FIG. 3 shows three generated images in panels (a), (b), and (c), having spherical aberration coefficients of −15, 0, and 15, respectively.
In diffraction theory, the pupil function is central to the generation of a PSF. In the frequency domain, the optical transfer function (OTF) is given by the pupil function convolved with its conjugate. The OTF (in the frequency domain) and the PSF (in the spatial domain), in turn, are related by a Fourier transform. Therefore, using the properties of the Fourier transform, a PSF can be calculated as the complex multiplication of f(x,y,z) and f*(x,y,z), where f(x,y,z) is the inverse Fourier transform of the pupil function and f*(x,y,z) is its complex conjugate.
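The pupil-to-PSF relationship described above can be sketched as follows. This uses a plain circular pupil (the unaberrated, in-focus case) on an arbitrary grid; the grid size and cutoff frequency are illustrative assumptions:

```python
import numpy as np

# PSF from a pupil function: f = inverse FT of the pupil, PSF = f x f* = |f|^2.
# A unit-amplitude disk pupil stands in for the general pupil function here.
N = 128
fx = np.fft.fftfreq(N)                   # frequency coordinates, FFT layout
FX, FY = np.meshgrid(fx, fx)
pupil = ((FX**2 + FY**2) <= 0.25**2).astype(complex)

f = np.fft.ifft2(pupil)                  # f(x,y): inverse FT of the pupil
psf = np.real(f * np.conj(f))            # PSF = f x f*, an Airy-like pattern
psf /= psf.sum()                         # normalize as a convolution kernel
```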
A pupil function can be found according to the following equation provided by Hopkins (1955), supra:
F(X,Y) = e^{iknAzρ²} for ρ² ≤ 1; F(X,Y) = 0 for ρ² > 1, where ρ² = X² + Y²  (Eq. 2)
k is the wave number, defined as
k = 2π/λ;
λ is the emission wavelength, typically between 350 and 1000 nm for optical sectioning;
z is the distance from the in-focus to the out-of-focus plane; and
A is a coefficient derived from a refraction index (n) and numerical aperture (NA).
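Eq. 2 can be sketched numerically as follows. The wavelength, numerical aperture, refraction index, and defocus distance below are illustrative assumptions, and the expression used for the coefficient A is only one plausible choice, since the text gives no explicit formula for it:

```python
import numpy as np

# Pupil function of Eq. 2: F(X,Y) = exp(i*k*n*A*z*rho^2) inside the unit
# circle, 0 outside. All numerical values are assumptions for illustration.
N = 128
x = np.linspace(-1.5, 1.5, N)            # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2

lam = 520e-9                             # emission wavelength (m), assumed
k = 2 * np.pi / lam                      # wave number, k = 2*pi/lambda
n = 1.515                                # refraction index (oil), assumed
NA = 1.4                                 # numerical aperture, assumed
A = 1 - np.sqrt(1 - (NA / n) ** 2)       # a plausible defocus coefficient
z = 0.5e-6                               # defocus distance (m), assumed

F = np.where(rho2 <= 1.0, np.exp(1j * k * n * A * z * rho2), 0.0)
```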
The hourglass angle, α, of the PSF is determined by the size of the aperture (assumed to be circular) of the optical system and the distance between the optical system and the imaged object. This angle can be calculated according to the equation:
sin α = NA/n  (Eq. 3)
where NA is the numerical aperture of the optical system; and
n is the refraction index.
In theory, the hourglass angle calculated by equation 3 is the same as the hourglass angle of the PSF calculated using equation 2. Often, however, the two differ. Equation 3 has therefore been used to validate the accuracy of the PSF calculated using equation 2.
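Eq. 3 can be evaluated directly as a check on a generated PSF. The numerical aperture and refraction index below are illustrative assumptions (values typical of an oil-immersion objective):

```python
import math

# Hourglass angle per Eq. 3: sin(alpha) = NA / n. Values are illustrative.
NA = 1.4                         # numerical aperture, assumed
n = 1.515                        # refraction index of the immersion medium
alpha = math.asin(NA / n)        # hourglass half-angle, radians
alpha_deg = math.degrees(alpha)  # roughly 67.5 degrees for these values
```

The angle measured from the hourglass pattern of a PSF generated with Eq. 2 can then be compared against `alpha` to validate that PSF.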
In addition, in order for the pupil function calculated using equation 2 to be correct, two conditions must be met. First, X and Y must be normalized so that the circular pupil resides exactly within a unit circle. The normalization factor should be the bandwidth of the pupil function, given by the equation:
Bpupil = NA/λ  (Eq. 4)
where NA is the numerical aperture; and
λ is the emission wavelength.
Second, the sampling density (Δx and Δy) in the spatial domain must satisfy the Nyquist criterion, i.e., max(Δx, Δy) ≤ 1/(2Bpupil). Because sampling density is determined by a user when acquiring data, the Nyquist criterion may not be satisfied. A sampling density that fails the Nyquist criterion yields a PSF having a significantly smaller hourglass angle than it should have.
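The sampling check described above can be sketched in a few lines. The wavelength, numerical aperture, and pixel spacing are illustrative assumptions:

```python
# Nyquist check against the pupil bandwidth of Eq. 4:
# max(dx, dy) must not exceed 1 / (2 * B_pupil). Values are illustrative.
NA = 1.4                         # numerical aperture, assumed
lam_um = 0.52                    # emission wavelength in micrometers, assumed
B_pupil = NA / lam_um            # Eq. 4, in cycles per micrometer

dx = dy = 0.1                    # user-chosen sampling density (um/pixel)
nyquist_limit = 1.0 / (2.0 * B_pupil)
satisfies_nyquist = max(dx, dy) <= nyquist_limit
```

For these values the limit is about 0.186 um/pixel, so a 0.1 um/pixel sampling density satisfies the criterion.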
A further difficulty in using equation 2 to calculate a PSF is that it assumes that PSFs are symmetric. Aberrations can cause a PSF to be asymmetric. Wilson (“The role of the pinhole in confocal imaging system” in HANDBOOK OF BIOLOGICAL CONFOCAL MICROSCOPY, 2d ed. (1995)) provides the following formula for calculating a pupil function useful in determining an asymmetric PSF:
FSA(X,Y) = F(X,Y) × e^{i2π·SA·ρ⁴}  (Eq. 5)
where
FSA(X,Y)≡ the pupil function for the spherically aberrated PSF;
F(X,Y) ≡ the symmetric pupil function calculated using Hopkins' formula above; and
SA≡ the coefficient for spherical aberrations.
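Eq. 5 can be sketched as follows. A plain unit-disk pupil stands in for the symmetric pupil F(X,Y), and the SA coefficient is an illustrative value (comparable in magnitude to the coefficients shown in FIG. 3):

```python
import numpy as np

# Spherically aberrated pupil per Eq. 5: multiply the symmetric pupil by a
# phase factor exp(i * 2*pi * SA * rho^4). Grid and SA value are assumptions.
N = 128
x = np.linspace(-1.5, 1.5, N)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2

F = np.where(rho2 <= 1.0, 1.0 + 0j, 0.0)          # stand-in symmetric pupil
SA = 15.0                                         # aberration coefficient
F_SA = F * np.exp(1j * 2 * np.pi * SA * rho2**2)  # rho^4 = (rho^2)^2
```

The aberrated PSF then follows from F_SA exactly as in the symmetric case, via the inverse Fourier transform and multiplication by its conjugate.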
In practice, however, none of the schemes mentioned above has proved satisfactory in correcting spherical aberration. PSFs constructed according to diffraction theory are generally not accurate; measured PSFs are difficult to obtain and unreliable; PSFs estimated concurrently with an estimation of the imaged object do not follow accurate parametric modeling of the PSF; and parametric blind deconvolution is prohibitively slow and requires enormous computing power.
Accordingly, a need exists for methods, systems, and program products that quickly detect spherical aberration, provide accurate PSF values, and utilize a robust deconvolution algorithm that incorporates those PSF values.