Optical imaging of objects emitting very low light levels, for example <10⁶ photons/s/cm², has applications ranging from scientific research to medical imaging. Objects of interest include fluorescently stained biological targets, phosphorescing objects, light passing through human tissue, surgical samples, in vitro experiments and in vivo experiments.
One source of light of particular interest is the light emitted from charged particles passing through dielectric materials at a speed greater than the phase velocity of light in that material. This phenomenon is commonly known as Cerenkov luminescence. Radiopharmaceuticals which emit energetic alpha and beta particles can be a source of these Cerenkov photons. Radiopharmaceuticals can also be designed to accumulate at the site of a wide range of diseases. Therefore, the ability to quickly and accurately image Cerenkov luminescence has wide-ranging application in the diagnosis and treatment of a variety of diseases.
In order to achieve this, high sensitivity cameras can be used to image the very weak light emitted from Cerenkov luminescence. Such cameras may be based around an electron-multiplying charge-coupled device (emCCD), an intensified CCD, a photomultiplier tube (PMT) array or micro-channel plates with electron collection by one or more electrodes. Typically, the object to be imaged and the highly sensitive camera are enclosed in a substantially light-tight enclosure. An image of the object can then be reconstructed from the signal averaged over tens of seconds up to tens of hours of exposure, depending on the luminosity of the object.
Optionally, these same high sensitivity cameras may also take high resolution photographic (white light) images of an object with the aid of a conventional light source to illuminate the object. The two images may then be digitally superimposed onto each other to aid interpretation of the images. Many commercially available cameras sufficiently sensitive to image low luminosity samples are also capable of taking such polychromatic images. However, there is a significant delay experienced when switching between imaging modes, to allow for the necessary cooling and dissipation of charge needed for high sensitivity imaging. This can be difficult to achieve in clinical/surgical environments. Additionally, a lens which is optimal for high-sensitivity imaging may not be suitable for white-light imaging and vice versa.
In order to overcome these issues, it is possible to introduce a second, high resolution, camera into the optical system. Doing so however introduces several technical challenges into the design of the optical path, as discussed in related patent application publication number WO2014020360.
For example, because the time required to capture an image of a given signal to noise ratio is proportional to the inverse of the photon intensity squared, in order to capture an image of a low luminosity sample in a reasonable/quick/useable time frame, it is imperative to capture as many of the emitted photons as possible. The above-described imaging systems have a number of problems to overcome to achieve this.
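The quadratic scaling can be illustrated with a short sketch. Assuming detection dominated by a constant dark/background count rate (a hypothetical noise model chosen for illustration; the function name and rates below are not from the source), SNR = S·t/√(D·t), so the exposure time needed for a target SNR grows as 1/S²:

```python
def exposure_time(target_snr, photon_rate, dark_rate):
    """Exposure time (s) to reach target_snr when noise is dominated
    by a constant dark/background rate D (counts/s):
        SNR = S*t / sqrt(D*t)  =>  t = target_snr**2 * D / S**2
    """
    return target_snr ** 2 * dark_rate / photon_rate ** 2

t_full = exposure_time(10, photon_rate=50.0, dark_rate=5.0)  # 0.2 s
t_half = exposure_time(10, photon_rate=25.0, dark_rate=5.0)  # 0.8 s
# halving the photon rate quadruples the required exposure time
```

Halving the collected photon rate quadruples the exposure time under this model, which is why capturing as many of the emitted photons as possible is so important.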
Two related parameters which affect the proportion of photons received at the imaging chip must be optimised. Firstly, the larger the lens aperture, the greater the amount of incident light collected; the amount of light collected rises in proportion to the square of the aperture diameter. Ideally a large aperture lens (f/#<1, where f/# = (2 sin(φ))⁻¹ and φ is the cone half-angle at the image) is therefore required. Secondly, it is desirable to adjust the optical system so that the region of interest of the sample fills as much of the field of view of the lens/camera as possible, in order to ensure that the imaging chip is fully utilised.
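Both relationships can be made concrete with a small sketch (the helper names are illustrative, not from the source):

```python
import math

def f_number(cone_half_angle_rad):
    # f/# = (2 sin(phi))^-1, where phi is the cone half-angle at the image
    return 1.0 / (2.0 * math.sin(cone_half_angle_rad))

def relative_light(aperture_diameter, reference_diameter):
    # collected light scales with the square of the aperture diameter
    return (aperture_diameter / reference_diameter) ** 2

f_num = f_number(math.radians(30))      # approx. 1.0: a 30 degree half-angle gives f/1
gain = relative_light(2.0, 1.0)         # 4.0: doubling the diameter collects 4x the light
```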
One problem with the above parameters is that as the aperture size of a lens increases, the depth of field of the resultant image reduces proportionally. In a system with a very large depth of field, for example, the location of the sample is not important as it will be acceptably in focus across a range of positions. As the depth of field decreases, however, the range of acceptable focus rapidly narrows. Thus, with a very large aperture lens (f/#<1), the depth of field can be as narrow as 2 mm (the depth of field of a 17 mm f/0.95 lens focussing at 15 cm using a sensor consisting of 16 μm pixels). Therefore, using a large aperture lens to capture as much light as possible means the object distance must be very accurately known, as the lens focus will need to be frequently adjusted.
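The 2 mm figure can be reproduced with the standard thin-lens depth-of-field approximation, valid when the subject distance is far below the hyperfocal distance (taking the 16 μm pixel pitch as the circle of confusion is an assumption made here for illustration):

```python
def depth_of_field(focal_mm, f_num, subject_mm, coc_mm):
    """Approximate total depth of field (mm):
        DoF ~ 2 * N * c * s**2 / f**2
    with N the f-number, c the circle of confusion, s the subject
    distance and f the focal length (s far below the hyperfocal distance).
    """
    return 2.0 * f_num * coc_mm * subject_mm ** 2 / focal_mm ** 2

# 17 mm f/0.95 lens focused at 15 cm, 16 um pixel pitch as circle of confusion
dof = depth_of_field(focal_mm=17.0, f_num=0.95, subject_mm=150.0, coc_mm=0.016)
# dof comes out at roughly 2.4 mm, in line with the ~2 mm figure quoted above
```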
In a conventional imaging system, the object distance can be determined by viewing a captured image and adjusting for best focus. This can be done automatically by, for example, taking the first spatial derivative of an image, adjusting the axial position of the lens, and then taking further images until a maximum in the derivative image is achieved. As discussed above, the typical integration time for a low light image is tens of seconds to hours, which means that it is not possible to determine the object distance in this way. Therefore, known methods of automatically focusing a lens are not suitable for low light imaging systems as described above.
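The derivative-based focus search described above can be sketched as follows (the `capture` callable and the grid of lens positions are hypothetical; a real system would drive a lens actuator):

```python
import numpy as np

def focus_metric(image):
    """Sum of squared first spatial derivatives; largest at best focus."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def autofocus(capture, positions):
    """Take an image at each axial lens position and return the
    position whose image maximises the focus metric."""
    return max(positions, key=lambda z: focus_metric(capture(z)))
```

Each call to `capture` costs a full exposure; for low light imaging that is tens of seconds to hours per trial position, which is exactly why this search is impractical here.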
Moreover, large aperture lenses also give rise to significant problems in altering the field of view. Altering the field of view (magnification) ensures that the region of interest of the sample fills as much of the field of view of the lens/camera as possible. The field of view of a camera system is determined by three parameters: the size of the imaging chip, the focal length of the imaging lens and the distance from the imaging lens to the object, wherein the focal length and the distance from the imaging lens to the object determine the image magnification.
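Under a thin-lens model, this relationship can be written down directly (a sketch; the example chip size is an illustrative value, not from the source):

```python
def field_of_view(chip_mm, focal_mm, object_mm):
    """Linear field of view at the object plane for a thin lens.
    Magnification m = f / (s - f), so FOV = chip size / m.
    """
    m = focal_mm / (object_mm - focal_mm)
    return chip_mm / m

# e.g. an 8.2 mm chip behind a 17 mm lens with the object at 150 mm
fov = field_of_view(chip_mm=8.2, focal_mm=17.0, object_mm=150.0)
# roughly 64 mm of the object plane fills the chip
```

With the chip size and focal length fixed, the only remaining way to change the field of view is to change the object distance, which motivates the approach discussed below.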
In a conventional imaging system, the field of view is readily varied by the movement of a series of imaging lenses which varies the focal length. The series of imaging lenses in combination are commonly known as a “zoom lens”. As the aperture size increases in relation to the focal length of the lens, the number and size of lens elements increases to correct for, inter alia, spherical and chromatic aberrations. Consequently, the complexity of the zoom lens rises exponentially with the size of the aperture. Because of this requisite complexity, zoom lenses with very large apertures are not commercially available/viable.
In the absence of a zoom lens, the field of view can be varied by cycling through a series of lenses with set focal lengths (i.e. a turret lens system). For reasons similar to those behind the absence of low f-number zoom lenses, the commercial availability of low f-number fixed lenses with different focal lengths is also very limited. With the shallow depth of field required for optimal light collection, the number of lenses required for a turret system is unfeasibly large, quite apart from the fact that the required variety of lenses is not currently commercially available.
One way of avoiding the need for complex lens systems is to vary the distance between the sample and the high sensitivity camera, i.e. the object distance, and to re-focus the lens for the new object position. There exist a number of significant technical problems associated with varying the object distance and refocusing a high sensitivity camera in a low light clinical/surgical environment.
While in conventional imaging systems the object and the camera can easily be moved, high-sensitivity imaging systems deployed in clinical or surgical environments pose particular problems in this regard, as users typically do not have time to make such adjustments. Moreover, even if the object distance can be adjusted, the problem remains of how to quickly re-focus the high sensitivity camera in a low light environment, when the time needed to take a single image is tens of seconds to hours.