Recently, light-microscopic imaging methods have been developed by means of which sample structures smaller than the diffraction-dependent resolution limit of conventional light microscopes can be represented, on the basis of sequential, stochastic localization of individual markers, in particular fluorescent molecules. Such methods are described for example in WO 2006/127692 A2; DE 10 2006 021 317 B3; WO 2007/128434 A1; US 2009/0134342 A1; DE 10 2008 024 568 A1; WO 2008/091296 A2; “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)”, Nature Methods 3, 793-796 (2006), M. J. Rust, M. Bates, X. Zhuang; and “Resolution of Lambda/10 in fluorescence microscopy using fast single molecule photo-switching”, Geisler C. et al., Appl. Phys. A, 88, 223-226 (2007). This new branch of microscopy is also referred to as localization microscopy. The methods used are known in the literature for example by the names (F)PALM ((fluorescence) photoactivation localization microscopy), PALMIRA (PALM with independently running acquisition), GSDIM (ground state depletion followed by individual molecule return) microscopy or (F)STORM ((fluorescence) stochastic optical reconstruction microscopy).
What the new methods have in common is that the sample structures to be imaged are prepared using point objects, referred to as markers, which have two distinguishable states, namely a “bright” state and a “dark” state. If for example fluorescent dyes are used as markers, the bright state is a state capable of fluorescence and the dark state is a state incapable of fluorescence. In preferred embodiments, as in WO 2008/091296 A2 and WO 2006/127692 A2 for example, photoswitchable or photoactivatable fluorescent molecules are used. Alternatively, as in DE 10 2006 021 317 B3 for example, inherent dark states of standard fluorescent molecules can be used.
In order to image sample structures at a resolution higher than the conventional resolution limit of the imaging optical unit, a small subset of the markers is now repeatedly converted into the bright state. In the simplest case, the density of the markers forming this active subset is selected such that the average distance between adjacent markers in the bright, and therefore light-microscopically imageable, state is greater than the resolution limit of the imaging optical unit. The markers forming the active subset are imaged onto a spatially resolving light detector, for example a CCD camera, so that a light distribution in the form of a light spot is detected for each punctiform marker, the size of this spot being determined by the resolution limit of the optical unit.
In this manner, a large number of individual raw data images are taken, in each of which a different active subset is imaged. In an image evaluation process, the centroid positions of the light distributions which represent the punctiform markers present in the bright state are then determined in each individual raw data image. The centroid positions of the light distributions which are ascertained from the individual raw data images are then collated in an overall representation in the form of an overall image data set. The high-resolution overall image produced by this overall representation reflects the distribution of the markers.
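The localization step described above, determining the centroid position of each detected light spot to sub-pixel accuracy, can be sketched as follows. The Gaussian spot model and all numerical values are illustrative assumptions for the purpose of the sketch, not taken from the cited documents:

```python
import numpy as np

def centroid_position(spot, pixel_size=100.0):
    """Intensity-weighted centroid of a detected light spot.

    spot: 2-D array of camera counts; pixel_size in nm.
    Returns (x, y) in nm relative to the spot's corner pixel.
    """
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    x = (xs * spot).sum() / total * pixel_size
    y = (ys * spot).sum() / total * pixel_size
    return x, y

def gaussian_spot(shape, x0, y0, sigma, amplitude=1000.0):
    """Diffraction-limited spot, approximated by a Gaussian PSF."""
    ys, xs = np.indices(shape)
    return amplitude * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

# A marker centered between pixels is recovered to sub-pixel accuracy,
# i.e. well below the 100 nm pixel size.
spot = gaussian_spot((15, 15), x0=7.3, y0=6.8, sigma=1.5)
x, y = centroid_position(spot)  # x ~ 730 nm, y ~ 680 nm
```

In practice, each raw data image contributes many such centroid positions, which are then collated into the overall image data set as described above.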
For representative reproduction of the sample structure to be imaged, a sufficiently large number of marker signals have to be detected. However, since the number of evaluable markers in the respectively active subset is limited, a very large number of individual raw data images have to be taken in order to image the sample structure in its entirety. Typically, the number of individual raw data images is in the range of several tens of thousands; this number varies greatly, since far more images have to be taken for complex structures than for simpler ones in order to resolve them.
In addition to the lateral determination of the position of the markers in the object plane (also referred to below as x-y-plane) which is described above, a determination of the position in the axial direction (also referred to below as z-direction) may also take place. “Axial direction” in this case means the direction along the optical axis of the imaging optical unit, i.e. the main direction of propagation of the light.
Three-dimensional localizations are known from what are called “particle-tracking” experiments, as described in Kao et al., 1994, Biophysical Journal, 67; Holtzer et al., 2007, Applied Physics Letters, 90; and Toprak et al., 2007, Nano Letters, 7(7). Such localizations have also already been used in image-generating methods based on the switching and localization of individual molecules described above. In this regard, reference is made to Huang et al., 2008, Science, 319 and Juette et al., 2008, Nature Methods. With regard to the prior art, reference is further made to Pavani et al., 2009, PNAS, 106.
In principle, a punctiform object can be localized in the z-direction by evaluating the change in the light spot detected on the detection surface of the camera, this change becoming visible when the point object moves out of the plane of sharpness, or focal plane, which is optically conjugate with the detection surface. In what follows, a point object is understood to mean an object whose dimensions are smaller than the diffraction-dependent resolution limit of the imaging optical unit, in particular of the detection objective. The detection objective images such an object into the image space in the form of a three-dimensional focus light distribution. This focus light distribution generates a light spot on the detection surface of the camera, which spot is described by what is known as the “point spread function”, or PSF for short. If the point object is now moved through the focus in the z-direction, i.e. perpendicularly to the plane of sharpness, the size and shape of the PSF change. By analyzing the detection signal corresponding to the detected light spot with regard to the size and shape of the PSF, conclusions can be drawn about the actual z-position of the object.
If the point object is located too far away from the plane of sharpness, the light spot generated on the detection surface of the camera becomes so blurred that the corresponding measuring signal can no longer be distinguished from the usual measurement noise. Therefore, in the object space there is a region in the z-direction around the central focal plane, or plane of sharpness, within which a point object generates a light spot on the detection surface that is still sharp enough to be evaluated in order to localize the point object in the z-direction. This region in the z-direction containing the plane of sharpness is referred to below as the “depth of field range”.
In the case of three-dimensional localization, however, there is the fundamental problem that the PSF originating from a point object is symmetrical with respect to the detection surface. This means that although the PSF changes if the point object is moved out of the plane of sharpness, so that the distance of the point object from the plane of sharpness can be determined, the change in the PSF is identical on both sides of the plane of sharpness. It is therefore not possible to decide on which side of the plane of sharpness within the depth of field range the point object is located.
Various methods are known for resolving the problem discussed above. Examples are methods referred to among experts as the “astigmatism method” (the above-mentioned documents Kao et al., Holtzer et al. and Huang et al.), the “biplane method” (cf. Toprak et al. and Juette et al.) and the “double helix method” (cf. Pavani et al.). What these methods have in common is that, in order to localize the point object in the z-direction, the light spot generated on a detector is analyzed in order to determine a parameter, and a z-position of the point object is associated with this parameter. This association takes place using association information determined in advance, which relates the parameter to the z-position of the point object. The parameter considered is, for example, a quantity portraying the shape of the light spot, as in the astigmatism method, or, as in the biplane method, a quantity relating to one another the extents of two light spots which originate from the same point object and are generated on detection surfaces whose associated planes of sharpness are offset from each other in the z-direction in the object space.
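The association step common to these methods, mapping a measured parameter to a z-position via association information determined in advance, can be sketched as follows. Here the parameter is an astigmatism-style width ratio of the light spot; the calibration values are illustrative assumptions only, not data from the cited documents:

```python
import numpy as np

# Association information determined in advance: for known z-positions
# (e.g. from a calibration scan of a bead), record the parameter value,
# here the width ratio wx/wy of the astigmatically distorted spot.
# A monotone, purely illustrative calibration curve is assumed.
cal_z = np.linspace(-400.0, 400.0, 17)   # known z-positions (nm)
cal_ratio = 1.0 + 0.002 * cal_z          # recorded wx/wy at each z

def z_from_ratio(ratio):
    """Associate a measured width ratio with a z-position by
    interpolating the previously recorded calibration curve."""
    return np.interp(ratio, cal_ratio, cal_z)

z = z_from_ratio(1.3)  # ~150 nm on one side of the plane of sharpness
```

Because the astigmatic distortion makes the parameter change monotonically through the focus, the calibration curve is single-valued and the side of the plane of sharpness is no longer ambiguous.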
In localization microscopy, in which resolutions of far below 100 nm, sometimes even in the region of a few nm, are achieved, optical imaging errors, which inevitably occur in every imaging optical unit, constitute a considerable problem. Whereas in conventional, diffraction-limited microscopy, in which resolutions measured in the object space of approximately 250 nm are obtained, the imaging errors can be sufficiently minimized by precision lens manufacture or additional corrective elements, this has hitherto not been readily possible in localization microscopy, where the resolution is so high that the remaining imaging errors are of considerable relevance. Examples of such imaging errors are chromatic aberrations, spherical aberrations or lateral field distortions, i.e. imaging errors which lead to distortion of the PSF in a plane perpendicular to the optical axis. One example of a lateral field distortion is coma.