The present invention relates generally to optical imaging systems, and more particularly to a neural-network-based system and method for measuring wavefront errors and correcting the optics within multi-aperture adaptive optical systems.
Imaging systems, particularly highly precise ones, require continual alignment. This is especially true for multi-aperture imaging systems, such as those which implement an array of phased telescopes for image collection. Alignment generally serves to reduce or eliminate optical wavefront errors that are introduced as telescope parameters drift from optimal positions due to, for example, temperature drift, vibration, or component shift or deformation.
Adaptive optics are capable of removing wavefront errors, but only if an accurate measurement of the wavefront is available. Therefore, a wavefront sensor must be incorporated into the imaging system.
Wavefront sensors, such as Hartmann wavefront sensors, make measurements from point sources such as a star or a laser beacon. Hartmann wavefront sensors require locally generated reference beams in order to measure quantities such as image intensity or wavefront phase. However, the use of additional reference beams is undesirable, as they add to the overall complexity of the wavefront sensor and can introduce additional sources of scattered light, which can have an adverse effect on the measurements of interest.
Wavefront sensors which use the object scene itself to provide information on the optical aberrations of an imaging system are, for some applications, more desirable than those which use reference beams. Other techniques include phase retrieval and shearing interferometry. Shearing interferometry is optically complex and requires reimaging optics for measurement of the wavefront at a pupil. Phase retrieval requires some knowledge of the object scene, such as the location of isolated point sources within the imaged field of view.
Phase diversity is an extension of the phase retrieval concept whereby two images, one containing an additional known aberration, are compared to determine the optical-system aberrations. Phase diversity algorithms that are independent of the object-scene content can be defined, making them useful for a broad range of adaptive optics applications.
There is a need for a multi-aperture imaging system that uses the object scene from the imaging system to determine and correct wavefront errors, independent of the objects of the scene. There is a further need for a system and method for reducing the complexity of collecting and analyzing the images from a multi-aperture imaging system so that corrections to parameters of the imaging system can be identified and made many times per second.
A prior-art phase diversity system (as described in "Joint estimation of object and aberrations by using phase diversity," by R. Paxman, T. Schulz and J. Fienup, J. Opt. Soc. Am. A, Vol. 9, No. 7, pp. 1072-1085 (July 1992)) assesses parameter errors by comparing in-focus and out-of-focus images using a model of the optical system. That approach requires iterating the system model to match each frame of data. The present invention performs the iterative matching off-line, during neural network training, so that on-line estimates can be made at a high frame rate without iteration.
The present invention is directed to a phase diversity wavefront correction system for use in multi-aperture optical imaging systems. A phase diversity sensor within the imaging system forms an in-focus image as a composite, focused image from the multiple apertures of the system, and also forms an additional image which is deliberately made out of focus to a known extent. Taken together, the two images are processed to create one or more metrics, such as the power metric and the sharpness metric. These metrics may be created, for example, according to the well-known method of Gonsalves, which is described in "Phase Retrieval and Diversity in Adaptive Optics," R. A. Gonsalves, Opt. Eng. 21, 829-832 (1982).
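The computation of such metrics from an image pair can be sketched as follows. This is a minimal illustrative example, not the patent's actual metric definitions: the sharpness metric here is taken as a normalized sum of squared intensities, and the "power" metric as radially binned ratios of spectral power between the defocused and in-focus images. The function name `diversity_metrics` and the binning scheme are assumptions introduced for illustration only.

```python
import numpy as np

def diversity_metrics(in_focus, defocused, nbins=8):
    """Illustrative sketch: derive scalar/vector metrics from an
    in-focus image and a deliberately defocused companion image."""
    # Sharpness metric (one common form): sum of squared intensities,
    # normalized by the squared total flux. Sharper images score higher.
    sharpness = np.sum(in_focus**2) / np.sum(in_focus)**2

    # Power metric (illustrative): ratio of spectral power between the
    # defocused and in-focus images, accumulated in radial frequency bins.
    F1 = np.abs(np.fft.fftshift(np.fft.fft2(in_focus)))**2
    F2 = np.abs(np.fft.fftshift(np.fft.fft2(defocused)))**2
    ny, nx = in_focus.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    edges = np.linspace(0.0, r.max(), nbins + 1)
    power = np.empty(nbins)
    for i in range(nbins):
        ring = (r >= edges[i]) & (r < edges[i + 1])
        power[i] = F2[ring].sum() / (F1[ring].sum() + 1e-12)
    return sharpness, power
```

Elements of such metric arrays (here, the radial power bins) are the candidate inputs from which the neural networks described below select a useful subset.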
Neural networks are provided, each having an output corresponding to a parameter of an aperture of the imaging system. The aperture may be a telescope in a system of telescopes acting as one, and the parameter may be a piston position (axial displacement) or tip/tilt (angular displacement) of one telescope with respect to the others in the system. Image quality depends critically on correct values of piston and tilt. The neural networks each correspond to one parameter of a telescope or to a combination of parameters. They are trained to identify a subset of elements within the metrics that, when input into the network, produce the best estimate of the piston or tip/tilt position relative to a reference telescope, or an estimate of a combination of parameters, such as the average over a subset of telescopes. Then, during active use of the system, the metrics generated from the in-focus and out-of-focus images of the object scene are fed to the trained neural networks to provide estimates of piston and/or tip/tilt positions. These estimates in turn drive the piston and/or tip/tilt controllers to correct for aberrant movement and keep the telescopes phased. If desired, a measure of the reliability of the estimates may be used to determine whether an estimate should be applied.
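The per-parameter estimation networks described above can be sketched as small feed-forward regressors, trained off-line on metric/parameter pairs and then evaluated at high rate during operation. This is a minimal numpy sketch under stated assumptions: a single-hidden-layer tanh network trained by per-sample gradient descent. The class name `ParameterNet`, the architecture, and the training hyperparameters are illustrative assumptions, not taken from the patent.

```python
import numpy as np

class ParameterNet:
    """One small network per telescope parameter (e.g. piston, tip, or
    tilt relative to a reference telescope). Input is a vector of
    selected metric elements; output is the parameter estimate."""

    def __init__(self, n_inputs, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, metrics):
        # Fast forward pass: this is all that runs on-line, so
        # estimates can be produced many times per second.
        h = np.tanh(self.W1 @ metrics + self.b1)
        return self.W2 @ h + self.b2

    def train(self, X, y, lr=0.05, epochs=300):
        # Off-line training on (metric vector, known parameter) pairs,
        # using plain per-sample gradient descent on squared error.
        for _ in range(epochs):
            for m, t in zip(X, y):
                h = np.tanh(self.W1 @ m + self.b1)
                err = (self.W2 @ h + self.b2) - t
                gh = err * self.W2 * (1.0 - h**2)   # backprop to hidden
                self.W2 -= lr * err * h
                self.b2 -= lr * err
                self.W1 -= lr * np.outer(gh, m)
                self.b1 -= lr * gh
```

The design point this illustrates is the one stated above: the expensive fitting happens once, off-line, in `train`, while the on-line path is a single cheap forward pass in `predict`.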