The three-dimensional (3D) nature of thick specimens presents a difficult challenge for imaging with two-dimensional (2D) detectors. Proper interpretation of thick microscopic specimens requires not only high-resolution imaging of structures in the plane of focus, but also the context those structures occupy within the overall specimen. Ideally, an observer should be able to perceive easily the relative axial positions of structures outside the focal plane, with the overall structure presented at high depth of field.
Traditional microscopy methods cannot simultaneously provide high resolution and high depth of field, nor can they distinguish between structures in front of and behind the focal plane. Recent optical sectioning methods (i.e., confocal, deconvolution, and two-photon microscopy) allow 3D reconstruction, but they typically require the sequential acquisition of a stack of 2D images at different focal planes. For example, confocal microscopy extracts 3D information from a z-stack of many optical sections at different levels of focus, and the pinhole rejects the vast majority of the light gathered by the objective. This inefficient process leads to limitations in certain circumstances; the most serious problems are phototoxicity when imaging live tissue, photobleaching of weakly fluorescent specimens, and slow image acquisition of thick specimens. Furthermore, the cost and maintenance needs of confocal and two-photon microscopes render them less accessible to many users.
Software techniques, e.g., 3D deconvolution of a large number of coupled 2D images taken at different focal planes, have been suggested as a less expensive option. In this approach, a set of images is acquired at various depths by moving the slide up and down or by changing the focus. The images of the object in each plane are related to images of objects in other planes, because the image from any focal plane contains light from points located in that plane as well as blurred light from points in other focal planes. There are many varieties of deconvolution algorithms that use the point spread function; blind deconvolution methods, which estimate a filter function resembling the point spread function, are also available. The disadvantages of this approach are the long reconstruction time, uncertain image convergence, and the prerequisite of a z-stack of images, which requires long image acquisition times and causes photobleaching and phototoxicity.
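The imaging model above (each recorded image is the true object convolved with the point spread function) can be illustrated with a minimal single-plane sketch. The example below uses the classic Richardson-Lucy iteration, one common PSF-based deconvolution scheme; the Gaussian PSF, image size, and iteration count are illustrative assumptions, not values from this work.

```python
import numpy as np
from scipy.signal import fftconvolve


def gaussian_psf(size=15, sigma=2.0):
    """Toy 2D Gaussian point spread function (stand-in for a measured PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()  # normalize so the PSF conserves total flux


def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution: iteratively refine an estimate so that,
    when re-blurred by the PSF, it matches the recorded image."""
    estimate = blurred.copy()          # start from the blurred image itself
    psf_mirror = psf[::-1, ::-1]       # flipped PSF for the correction step
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate


# Toy demonstration: a single bright point blurred by the PSF, then restored.
obj = np.zeros((64, 64))
obj[32, 32] = 1.0
psf = gaussian_psf()
blurred = np.clip(fftconvolve(obj, psf, mode="same"), 0.0, None)
estimate = richardson_lucy(blurred, psf)
```

After a few dozen iterations the estimate concentrates the blurred spot back toward a point, so its peak is sharper than that of the blurred input. A real 3D deconvolution couples an entire z-stack through a 3D PSF rather than a single plane, which is precisely why it inherits the long acquisition times noted above.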
Hence, there remains a need for a new approach that extracts comparable 3D information from a single focal plane. Such an approach could dramatically improve image-acquisition efficiency, in terms of both time and light exposure, while avoiding photobleaching, phototoxicity, and motion artifacts.