The technical requirements that must be met by the recording, processing and reproduction of images keep growing with the increasing demand for information and for its clear graphic visualization. The rapid advance in these fields goes hand in hand with the ever accelerating image processing by computers.
Today, the main field in which electronic image processing is applied involves the further processing of images that are taken by cameras, scanning systems and sensors in the visible light spectrum as well as in other sections of the electromagnetic spectrum such as the infrared, the radio frequency and the X-ray frequency ranges. After electronic processing, the images are reproduced either as individual images or as moving images on an image reproduction screen such as a display for presenting the information to the eye.
On the one hand, electronic image processing makes it possible to render special image contents more easily recognizable. Known techniques for this purpose include, for example, local frequency filtering, edge sharpening, image data compression, image correlation, dynamic range reduction, and false color coding. On the other hand, other techniques are concerned with the superposition or subtraction of auxiliary images taken from different spectral ranges, or with the superimposing of stored plans, maps, and drawings onto the original image.
For many applications an image presentation practically free of time lag is of great advantage to the eye, for example when operating an aircraft, ship, or vehicle, or in the open-loop control and monitoring of processes and assembly lines. By applying image processing, the information content of the actual, direct image can be intentionally increased or reduced. Image processing is used over a wide range of tasks, from increasing the image contrast to blending in additional information, marking details, and highlighting dangers.
In many of these applications, it is disadvantageous that the electronic camera is a "second eye system" separate from the human eye. This disadvantage is due to the fact that the images are seen from another recording location and that, additionally, the pictures on the image screen are presented at an observation location different from that of the direct view. Thus, the human eye must constantly change between direct observation and indirect observation while taking into account different observation angles, different image details, and different size ratios, which leads to physical impairments and to delays when decisions must be made.
The above problems have in part been solved by the "head-up display (HUD)" technique used in the piloting of combat aircraft, in that important information such as instrument displays and target data is faded into the open spectacles of the pilot's helmet and thus into the visual field of the pilot. This technique is also used experimentally in the automobile industry for displaying instrument readings on the windshield so that the driver is not distracted from viewing the road by viewing the instrument panel.
The HUD technique has been further developed into the so-called "virtual reality" or "cyberspace" technique, wherein closed spectacles are used, i.e. glasses in which the outside view is blocked, and complete three-dimensional images are projected into the eye by the HUD as a virtual reality. These virtual reality images are then modified in response to body motions such as locomotion, movement of an arm or a finger, or head and eye movements.
The HUD technique generates an image on an image screen and projects the image into the eye after reflection on the surface of the spectacle glasses. The eye looks, so to speak, "around the corner" at the display, through the glasses acting as full mirrors. Where open spectacles are used, a partially transmitting mirror permits the simultaneous viewing of the outside environment. Since the display is connected to the head, the image follows the head movements.
Certain HUD devices are equipped with an "eye tracker" which follows the eye movements with the help of a motion sensor applied to the eyeball, or with a camera which observes the movements of the eye pupils or of the vascular structure of the retina. It is thus possible to electronically shift the image projected in the HUD device within the visual field, corresponding to these movements.
It is possible in a HUD device to project the image through the projection optics into "infinity" in order to relax the eye, free of accommodation. By presenting the same object to the two eyes at different viewing angles, stereoscopic, i.e. three-dimensional, vision is possible.
On the one hand, these applications and techniques illustrate the high level of electronic image processing, which is capable of processing moving images with an acceptable quality, almost without time lag, and with a reasonable technical effort and expense. On the other hand, these techniques also illustrate the increasing demand for a direct image transmission into the eye.
However, there are limits to current HUD techniques. The accuracy or precision of the automatic tracking of the eye movements with the eye tracker is substantially worse than the alignment precision and image resolution of the eye. As a result, the faded-in image floats or dances in the visual field, which leads to an imprecise target acquisition and is tiring to the eyes.
For the above reasons, conventional applications of the full image reproduction are limited to the use of closed spectacles, i.e. to the exclusive fade-in of external images. Contrary thereto, when open spectacles are used, permitting an additional external view, the fade-in is still limited to simple additional information in the form of text, symbols, or image contours.
A complete three-dimensional and temporal overlap between faded-in images and the real image seen by the eye requires an exact three-dimensional and temporal coincidence of the two images on the retina. It is the aim of the invention to achieve this coincidence by directly recording or photographing the retina image and then projecting the new image back onto the real image substantially without any time lag and in congruence.
First, the prior art will be discussed as far as it relates to the recording of retina reflex images, to image scanning in the internal eye and the projection of laser images directly into the eye. The invention starts from this prior art.
The technical realization of a continuous imaging of the retina reflex of the environment or exterior requires that the optical reflex of the retina is actually usable. The reflection capability of the retina has been measured in detail, for example by F. C. Delori and K. P. Pflibsen in an article entitled "Spectral Reflectance of the Human Ocular Fundus", which appeared in "Applied Optics", Vol. 28, No. 6, 1989. The reflection capability of the fovea centralis of the retina has a low value of 0.2% in the blue visual spectral range (450 nm) and increases monotonically to a value of 10% in the long-wave red range (750 nm). In the range of the largest eye sensitivity and the most acute vision, namely in the green/yellow range between 500 nm and 600 nm, the reflection capability is between 1 and 2%.
Thus, a recording system for this reflection capability must be constructed for an illumination density of the retina reflex that is smaller by a factor of 50 to 100 than the illumination density of the objects seen by the eye. A further impairment of the available light quantity is due to the small diameter of the eye pupil of 1 to 7 mm, which is small compared to conventional technical recording systems such as photographic and video cameras. For these two reasons, the recording of the light reflected by the retina requires an especially sensitive light sensor.
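The two light-loss factors just mentioned can be checked with simple arithmetic. The following sketch uses the reflectance values from the cited measurements; the 30 mm camera lens aperture is an assumed comparison value, not taken from the text.

```python
import math

# Loss factor 1: the retina reflex is only 1-2 % as bright as the scene
# (green/yellow range, per the Delori/Pflibsen measurements cited above).
def attenuation_factor(reflectance):
    """Factor by which the retina reflex is dimmer than the directly seen scene."""
    return 1.0 / reflectance

print(attenuation_factor(0.02))   # 50.0  -> brightest case (2 % reflectance)
print(attenuation_factor(0.01))   # 100.0 -> dimmest case (1 % reflectance)

# Loss factor 2: the small pupil gathers little light. Compare a 3 mm pupil
# with an assumed 30 mm camera lens aperture; collected light scales with area.
pupil_area  = math.pi * (3 / 2) ** 2    # mm^2
camera_area = math.pi * (30 / 2) ** 2   # mm^2
print(camera_area / pupil_area)         # ~100 -> pupil collects ~1/100 the light
```

Together these two factors explain why an especially sensitive light sensor is required.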
It is known that a structured reflex image is generated in the area of the fovea centralis of the retina when an image is formed in the eye. This phenomenon is described, for example, by F. W. Campbell and D. G. Green in an article entitled "Optical and Retinal Factors Affecting Visual Resolution", published in the Journal of Physiology, No. 181, pages 576 to 593, (1965). Campbell and Green projected a brightly lit extensive grid structure onto the retina; the image reflected by the eye was deflected out of the beam path with a beam splitter mirror and imaged with a sharp focus outside of the eye on an image plane (screen). The surface image of the grid structure, after its reflection by the retina, that is after passing twice through the eye, served for the determination of the modulation transfer function of the eye. The photometric evaluation showed that the quality of the reflex image came very close to the image quality seen by the eye.
The closed, static recording device used by Campbell employed an extremely high image illumination by photoflash with the eye in a fixed position. Such a device is not suitable for recording weakly illuminated dynamic exterior images on the retina while the rapid natural eye movements take place. For this purpose, light-sensitive, rapid sensors are required together with a recording technique which sufficiently suppresses extraneous light in the open beam path and which is also capable of recording images at least with the repetition frequency of customary video standards.
CCD cameras, which record all image dots in parallel with a fixed integration time, and serially scanning image recording systems with individual detectors (photodiodes or photomultipliers) are suitable for these purposes. Serial scanning involves sensing the image dots sequentially in time, one after the other. Both of these techniques are adapted to customary video standards.
A basic advantage of using the CCD recording technique is the long integration time in each image dot or pixel of, for example, 20 ms, compared to the short residence time in each pixel of only 40 ns during serial scanning. However, the serial recording technique has a number of other advantages over the parallel recording technique in connection with the recording of very weak, rapidly changing light signals against a strong background noise. These other advantages make up for the short integration time.
These other advantages are:
- a serial signal processing which makes possible a direct analog further processing of the image in real time;
- an efficient suppression of scattered light by the small momentary visual field during scanning;
- a high preamplification with low background noise by the employed avalanche photodiodes and photomultipliers;
- a high signal dynamic range which accommodates the large variations of the picture brightness on the retina;
- an efficient analog background noise suppression, for example by phase lock-in detection or by signal correlation; and
- a simple correction of imaging or recording errors.
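The integration-time comparison given above follows from simple dwell-time arithmetic. In this sketch, the figure of 500,000 pixels per image is an assumed round number chosen to be consistent with the 20 ms and 40 ns values stated in the text.

```python
# Dwell time of a serial scanner vs. integration time of a parallel (CCD) sensor.
frame_time_s = 20e-3      # CCD integration time per image (20 ms)
pixels       = 500_000    # assumed number of pixels per image

# Residence time per pixel when the same image is scanned serially:
dwell_time_s = frame_time_s / pixels
print(dwell_time_s)                    # -> 40 ns per pixel

# The CCD integrates each pixel for the full frame time, i.e. about 500,000
# times longer than the serial scanner dwells on it.
print(frame_time_s / dwell_time_s)     # -> ~500,000
```

This factor is what the listed advantages of serial recording must compensate.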
With regard to the object of the invention, the critical advantage of serial image scanning is the further possibility of combining such image scanning with a time-delayed, synchronous, serial laser image projection into the eye.
Due to these advantages of serial scanning as compared to film and video recordings, serial scanning has been used since the early 1950s, primarily for the recording of microscope images. Three recording methods can be applied with serial scanning. The first "flying-spot" method is achieved by an area illumination of the object and a spot-type or pixel-type scanning with a photosensor (photoreceiver). The second method, also referred to as "flying-spot", involves scanning the object with a point light source and a surface area pick-up with a photosensor. The third method, referred to as "confocal scanning", involves a spot illumination and a simultaneous spot pick-up with a photosensor. The same scanning device can be used for the spot illumination and the spot pick-up.
In practicing the first two methods, either the light source or the sensor is rigidly mounted, while the other is moved over the object. In the third method the light source and the receiver (sensor) are imaged together (confocally) onto the spot to be scanned. In this confocal method the light source and receiver are held in a fixed position relative to each other.
In order to highlight the novel merits of the invention and its technical embodiments, the current status of the applications of image recordings and laser projections into the eye will now be explained in more detail.
U.S. Pat. No. 4,213,678 (O. Pomerantzeff and R. H. Webb) (September 1980) discloses for the first time the second type of "flying-spot" recording technique, with a scanned laser beam used as the illumination source and a rigid large-area photomultiplier used as the sensor or receiver for the pick-up or recording of the inner structure of the eye. These components are part of a scanning ophthalmoscope for examining the fundus of the eye.
An article by R. H. Webb, G. W. Hughes, and F. C. Delori entitled "Confocal Scanning Laser Ophthalmoscope", published in "Applied Optics", Vol. 26, No. 8, pages 1492 to 1499 (1987), describes a further development of the above technique into a confocal arrangement with the simultaneous scanning of the laser beam and of the receiver axis of the photomultiplier.
In the apparatus of Webb, Hughes, and Delori, the retina is scanned by a laser beam in a raster pattern. The laser beam illuminates the object pixel-by-pixel (dot-by-dot) and line-by-line. The photosensor (photomultiplier) measures the respectively reflected light and transforms the sequence of measured values into a video signal. A television monitor eventually presents the video signal as an image. These three operation steps take place in exact synchronism: while the laser beam scans the eye background line by line, the television signal is simultaneously assembled.
The laser beam first passes through a modulator by which the illumination intensity can be controlled in an open loop manner. The horizontal line deflection is generally performed by a rapidly rotating polygonal mirror, while the vertical deflection is performed by a swinging mirror. The pivot point of the scanning motion is located in the pupil plane of the eye. The light reflected, or rather scattered, by the eye background is collected over the entire pupil opening and supplied to the photoreceiver or sensor through imaging optics. In this manner the beam deflection is cancelled again and a stationary light bundle is obtained which is imaged onto a small detector surface.
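The exact synchronism of illumination, pick-up, and video assembly described above can be reduced to a simple nested loop. The following sketch is purely illustrative; the function names and the toy sensor are assumptions, not part of the described apparatus.

```python
# Serial (flying-spot) raster scanning reduced to its logic: the laser dwells
# on one pixel at a time, the photosensor reads the reflected intensity, and
# the video signal is assembled pixel-by-pixel in exact synchronism.
def scan_frame(lines, pixels_per_line, read_reflex):
    """Assemble one video frame; read_reflex(line, pixel) models the photosensor."""
    video_signal = []
    for line in range(lines):                  # vertical deflection (swinging mirror)
        for pixel in range(pixels_per_line):   # horizontal deflection (polygon mirror)
            video_signal.append(read_reflex(line, pixel))
    return video_signal

# Toy sensor: a constant reflex of 1.5 % (mid green/yellow range).
frame = scan_frame(250, 2000, lambda line, pixel: 0.015)
print(len(frame))   # 500000 samples -> one frame of the serial video signal
```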
Webb, Hughes, and Delori recognized in the above mentioned article the possibility of using a confocal imaging in an ophthalmoscope for projecting artificial images with the aid of a laser projection into the eye. This possibility was described as follows: "The laser beam is deflected by a fast (15-kHz) horizontal scanner and a slow (60-Hz) vertical scanner to project a standard format TV raster on the retina. Modulation of the beam permits projection of graphics or even gray scale pictures in the raster. While the patient is seeing the TV picture projected on his/her retina, an image of the retina is displayed on a TV monitor."
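The raster geometry implied by the quoted scanner rates follows directly from dividing the two frequencies (a sketch; retrace time is ignored).

```python
# Each period of the fast horizontal scanner writes one raster line; each
# period of the slow vertical scanner completes one frame.
h_scan_hz = 15_000   # fast horizontal scanner (15 kHz)
v_scan_hz = 60       # slow vertical scanner (60 Hz, i.e. 60 frames per second)

lines_per_frame = h_scan_hz // v_scan_hz
print(lines_per_frame)   # 250 -> roughly a standard TV raster
```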
The direct projection of modulated light stimuli and patterns is used in modern laser scanning ophthalmoscopes such as those manufactured, for example, by the firm "Rodenstock" of Munich. These ophthalmoscopes are primarily used for line of sight analyses, video line of sight determinations, and contrast sensitivity measurements, whereby only one laser wavelength is used at a time.
Further proposals for the direct image transmission into the eye by lasers are known from the following two publications.
European Patent Publication 0,473,343 B1 of Nov. 19, 1995 (Sony Corporation) discloses a "direct viewing picture image display apparatus". This direct viewing display apparatus employs substantially only the known technical solutions described above involving confocal imaging. These confocal imaging techniques have been realized in laser scanning ophthalmoscopes that are produced, for example, by the firm of Rodenstock Instruments of Munich, and such ophthalmoscopes are on the market.
The technical solution for expanding the image transmission from only one color to three colors is described in claims 10 and 11 of European Patent Publication 0,473,343 B1. Such a technique has also been used in other laser displays for many years. The shifting of the depth position of the images on the retina as described in claims 12 to 16 of said patent has been applied in the form of similar measures in existing equipment.
The separation of two beams by distinguishing polarizations, as described in claims 16 to 19 and shown in FIG. 6 of the above mentioned Sony patent, in order to project the same image into both eyes, is basically unsuitable for a "true" three-dimensional image presentation, because these images do not have any perspective differences. Further, said method does not permit any dynamic or individual adaptation to the eye alignment, and thus it is difficult to realize this teaching in practice.
European Patent Publication 0,562,742 A1 (Motorola, Inc.), published in August 1993, describes a direct view image display apparatus referred to as a "direct retinal scan display" which involves the direct image transmission onto the retina as in the above described Sony patent, however with the difference that the projection is accomplished by deflection through spectacles worn by a person.
The Motorola disclosure does not add new solution proposals to the existing prior art. The direct mounting of the entire display on the head of the user as defined in claim 4 or the deflection of the beam path of the projector through spectacles as defined in claim 5, has been realized in so-called "virtual reality" spectacles or in pilot helmets equipped with a "head-up display".
Different optical requirements must be met by the laser beam deflection in order to successfully project an image onto the retina. Such laser beam deflection requires, in addition to the special construction of the beam guiding elements following the beam deflection, a special curvature (vaulting) of the spectacle glass. The Motorola disclosure does not address solutions to these basic optical problems.