In the recent past, light-microscopy methods have been developed with which, based on a sequential, stochastic localization of individual markers, in particular fluorescent molecules, it is possible to display specimen structures that are smaller than the diffraction-limited resolution limit of classic light microscopes. Such methods are described, for example, in WO 2006/127692 A2; DE 10 2006 021 317 B3; WO 2007/128434 A1; US 2009/0134342 A1; DE 10 2008 024 568 A1; WO 2008/091296 A2; M. J. Rust, M. Bates, X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nature Methods 3, 793-796 (2006); C. Geisler et al., “Resolution of Lambda/10 in fluorescence microscopy using fast single molecule photo-switching,” Appl. Phys. A 88, 223-226 (2007). This new branch of microscopy is also referred to as “localization microscopy.” The methods applied are known in the literature, for example, under the designations (F)PALM ((fluorescence) photoactivation localization microscopy), PALMIRA (PALM with independently running acquisition), GSD(IM) (ground state depletion (individual molecule return) microscopy), or (F)STORM ((fluorescence) stochastic optical reconstruction microscopy).
The new methods have in common the fact that the specimen structures to be imaged are prepared with point-like objects, called markers, that possess two distinguishable states, namely a “bright” state and a “dark” state. For example, if fluorescent dyes are used as markers, the bright state is then a fluorescence-capable state and the dark state is a non-fluorescence-capable state.
In preferred embodiments, for example in WO 2008/091296 A2 and WO 2006/127692 A2, photo-switchable or photoactivatable fluorescent molecules are used. Alternatively, as e.g. in DE 10 2006 021 317 B3, inherent dark states of standard fluorescent molecules can be used.
In order for specimen structures to be imaged at a resolution that is higher than the classic resolution limit of the image-producing optical system, a small subset of the markers is repeatedly transferred into the bright state. The density of the markers constituting this active subset must be selected so that the average spacing between adjacent markers in the bright state, and thus in the state that can be imaged by light microscopy, is greater than the resolution limit of the imaging optical system. The markers constituting the active subset are imaged onto a spatially resolving light detector, e.g. a CCD camera, so that the light distribution of each point-like marker is sensed in the form of a light spot whose size is determined by the resolution limit of the optical system.
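The density condition described above can be illustrated with a minimal sketch. The resolution limit and marker positions used here are hypothetical values chosen for illustration only; the check simply verifies that no two bright markers lie closer together than the assumed diffraction limit, so that their light spots remain individually distinguishable.

```python
import math

# Assumed lateral diffraction-limited resolution (illustrative value)
RESOLUTION_LIMIT_NM = 250.0

def min_pairwise_spacing(positions):
    """Smallest distance in nm between any two marker positions (x, y)."""
    best = float("inf")
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            best = min(best, math.hypot(dx, dy))
    return best

# Illustrative active subset of bright markers (positions in nm)
active = [(0.0, 0.0), (800.0, 100.0), (300.0, 900.0), (1200.0, 1200.0)]
print(min_pairwise_spacing(active) > RESOLUTION_LIMIT_NM)  # True: spots resolvable
```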
A plurality of individual raw-data images are sensed in this manner, in each of which a different active subset is imaged. In an image analysis process, the center-point positions of the light distributions, representing the point-like markers that are in the bright state, are then determined in each individual raw-data image. The center-point positions of the light distributions ascertained from the individual raw-data images are then combined into one overall depiction in the form of an overall image data set. The high-resolution overall image produced by this overall depiction reflects the distribution of the markers.
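One simple way in which the center-point position of a sensed light spot could be determined — offered here as a hedged sketch, not as the specific image-analysis process of the methods cited above — is an intensity-weighted centroid over the pixels of the spot:

```python
# Minimal sketch: estimate the center-point position of one light spot
# from its pixel intensities via an intensity-weighted centroid.
# (Practical implementations often fit a model PSF instead.)

def centroid(image):
    """Intensity-weighted centroid (x, y) of a 2D list of pixel values."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            sx += x * value
            sy += y * value
    return sx / total, sy / total

# Illustrative 5x5 light spot, symmetric about pixel (2, 2)
spot = [
    [0, 0, 1, 0, 0],
    [0, 2, 4, 2, 0],
    [1, 4, 9, 4, 1],
    [0, 2, 4, 2, 0],
    [0, 0, 1, 0, 0],
]
print(centroid(spot))  # -> (2.0, 2.0)
```

Because the centroid is a sub-pixel quantity, the localization precision obtained this way can be much finer than the pixel pitch of the camera, which is what permits the combined overall image to exceed the diffraction-limited resolution.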
For a representative reproduction of the specimen structure to be imaged, a sufficiently large number of marker signals must be detected. But because the number of markers in each active subset is limited by the minimum average spacing that must exist between two markers in the bright state, a very large number of individual raw-data images must be sensed in order to image the specimen structure completely. The number of individual raw-data images is typically in a range from 10,000 to 100,000.
In addition to the above-described determination of the lateral position of the markers in the object plane (hereinafter also referred to as the X-Y plane), a position determination in an axial direction (hereinafter also referred to as the Z direction) can also occur. The axial direction here means the direction in the optical axis of the image-producing system, i.e. the principal propagation direction of the light.
Three-dimensional localizations are known from so-called “particle tracking” experiments, such as those described in Kajo et al., 1994, Biophysical Journal, 67, Holtzer et al., 2007, Applied Physics Letters, 90, and Toprak et al., 2007, Nano Letters, 7(7). They have also been utilized already in image-producing methods that are based on the above-described switching and localization of individual molecules. Reference is made here to Huang et al., 2008, Science, 319, and Juette et al., 2008, Nature Methods. Reference regarding the existing art is further made to Bossi et al., 2008, Nano Letters, 8(8), 2463-2468 and to Pavani et al., 2009, PNAS, 106.
Localization of a point-like object in the Z direction can be accomplished in principle by evaluating the change in a light spot, sensed on the detection surface of the camera, that becomes visible when the point-like object moves out of the sharpness plane or focal plane optically conjugate with the detection surface. A point-like object is to be understood hereinafter as an object whose dimensions are smaller than the diffraction-limited resolution limit of the image-producing system, in particular of the detection objective. In this case the detection objective images an object of this kind into the image space in the form of a three-dimensional focus light distribution. The focus light distribution generates on the detection surface of the camera a light spot that is also referred to in the literature as the “point spread function,” abbreviated PSF. If the point-like object is moved through the focus in the Z direction, i.e. perpendicular to the plane of sharpness, the size and shape of the PSF change. By analyzing the detected signal corresponding to the sensed light spot in terms of the size and shape of the PSF, inferences can be drawn as to the actual Z position of the object.
In the context of a three-dimensional localization, however, the basic problem exists that the PSF deriving from a point-like object is symmetrical with respect to the plane of sharpness. This means that although the PSF changes when the point-like object is moved out of the plane of sharpness, so that the spacing of the object from the plane of sharpness can be determined, the change in the PSF is nevertheless the same on either side of the plane of sharpness, so that it is impossible to decide on which side of the plane of sharpness the point-like object is located.
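The sign ambiguity can be made concrete with a toy defocus model. The model and parameter values below are assumptions for illustration, not taken from the source: the lateral width of the light spot is taken to grow with distance z from the plane of sharpness roughly as w(z) = w0·sqrt(1 + (z/z_R)²). Because that width depends only on |z|, the spot size alone cannot reveal the sign of the defocus.

```python
import math

# Toy defocus model (an illustrative assumption): spot width grows with
# axial distance z from the plane of sharpness.
W0_NM = 250.0   # assumed in-focus spot width
Z_R_NM = 500.0  # assumed axial scale of the defocus

def spot_width(z_nm):
    """Lateral width of the sensed light spot at defocus z (in nm)."""
    return W0_NM * math.sqrt(1.0 + (z_nm / Z_R_NM) ** 2)

# The width is an even function of z: +300 nm and -300 nm of defocus
# produce exactly the same spot size, so the side cannot be decided.
print(spot_width(300.0) == spot_width(-300.0))  # True
```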
Three-dimensional localization of point-like objects becomes even more difficult when so-called multicolor measurements need to be carried out, in which the specimen is marked with different dyes and the detected signals sensed for these different dyes must be separated. Multicolor measurements are helpful in particular for co-localization of structures, for example proteins, inside a cell. If it is possible to separate the detected signals of the different dyes, a conclusion can then be drawn as to the respective distribution of the dyes and thus of the various structures.
A fundamental problem that occurs in high-resolution localization microscopy in the context of multicolor measurements is the so-called longitudinal chromatic aberration (also called axial chromatic aberration) exhibited to a certain degree by every detection objective. The longitudinal chromatic aberration is understood as an aberration which causes an object that emits light of different colors to be imaged in different image planes as a function of wavelength. This is illustrated in FIG. 1, in which it is assumed that a point-like object 10 is emitting light of two wavelengths (depicted in FIG. 1 respectively as a dashed and a dotted line and labeled 12 and 14). The longitudinal chromatic aberration that occurs in a detection optical system constituted by an objective 16 and a tube lens 18 now causes the focus light distributions generated by the detection optical system for the different wavelengths to be offset from one another in the Z direction. The center points of the mutually offset focus light distributions are labeled 20 and 22 in FIG. 1.
Longitudinal chromatic aberration is caused by the fact that the lens material exhibits dispersion, i.e. possesses different refractive indices for different wavelengths of light. Longitudinal chromatic aberration can be decreased to a certain extent by skillful lens design. Typical high-performance objectives, for example, possess a longitudinal chromatic aberration of 150 to 200 nm, which means that a blue dye and a red dye that are located in the same plane of sharpness will be imaged with an offset of 150 to 200 nm from one another in the Z direction.
Longitudinal chromatic aberrations in the range recited above are tolerable in conventional microscopy, since the resolutions achievable in the Z direction are in any case only in a range from 600 to 1000 nm. On the other hand, however, very much better resolution values are now being achieved in localization microscopy, based on the detection of individual molecules. Resolutions below 50 nm, for example, are achievable. At such resolutions, a longitudinal chromatic aberration of the magnitude recited above is no longer tolerable.
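The kind of correction this implies can be sketched as follows. The channel names and the 180 nm offset are illustrative assumptions, not values from the source: if a calibration were to show that the red channel focuses 180 nm deeper than the blue channel, each channel's localized Z values could be shifted by its measured offset before the channels are combined into one overall image.

```python
# Hedged sketch of a per-channel axial correction for longitudinal
# chromatic aberration. Offsets and channel names are illustrative only
# and would come from a calibration measurement in practice.
AXIAL_OFFSET_NM = {"blue": 0.0, "red": 180.0}

def correct_axial(localizations):
    """Subtract each channel's chromatic offset from its z positions."""
    return [(channel, z_nm - AXIAL_OFFSET_NM[channel])
            for channel, z_nm in localizations]

# Two markers that in reality sit in the same plane, but whose raw z
# values differ by the chromatic offset of the objective:
raw = [("blue", 100.0), ("red", 280.0)]
print(correct_axial(raw))  # both markers now at z = 100.0 nm
```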