Visual Field Testing
The gold standard in visual field analysis has been marketed for some time by the assignee herein under the trademark Humphrey® Field Analyzer (HFA). The HFA projects a light stimulus onto an aspheric bowl (see for example U.S. Pat. No. 5,323,194). The HFA test image is intuitively quite simple. The HFA presents stimulus targets of various sizes and intensities on a background of fixed, uniform intensity, and determines whether the subject perceived the stimuli. For example, one HFA stimulus is a white circle of diameter 0.43 degrees presented for a duration of 200 ms against a background with a brightness of 31.5 apostilbs. The stimulus brightness is one of 52 precisely defined intensities, and the stimulus is presented at a location relative to a fixed position. Nominally, the fixed position is defined relative to a fixation target, on which the subject fixates their gaze. The stimulus may be repeated at different locations across the field of view. The full field of view covered by the HFA is as large as +/−90 degrees along the horizontal axis, while the most diagnostically relevant regions, sufficient for most care decisions, lie within the central +/−30 degrees. The subject indicates that a stimulus is perceived by pressing a button. The gaze of the patient may be monitored and analyzed throughout the test using various methods including but not limited to corneal reflexes and images (see for example U.S. Pat. Nos. 5,220,361 and 8,684,529, hereby incorporated by reference).
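The relationship between the precisely defined stimulus intensities and luminance can be illustrated with the decibel attenuation scale conventional in automated perimetry; the 10,000 apostilb maximum used below is the commonly cited HFA value, assumed here rather than stated above. A minimal Python sketch:

```python
import math

# Conventional perimetric decibel scale (an assumption here, not stated in
# the text): attenuation in dB relative to a maximum stimulus luminance,
# commonly cited as 10,000 apostilbs for the HFA.
MAX_ASB = 10_000.0

def stimulus_asb(db):
    """Stimulus luminance (apostilbs) for a given attenuation in dB."""
    return MAX_ASB * 10 ** (-db / 10)

def attenuation_db(asb):
    """Attenuation (dB) for a given stimulus luminance (apostilbs)."""
    return 10 * math.log10(MAX_ASB / asb)
```

Under this scale, 52 intensity levels would correspond to attenuations of 0 through 51 dB in 1 dB steps.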
In order to visualize the stimulus on the bowl, the refractive error of the subject should be corrected to allow focus on the surface of the bowl, particularly when performing tests in the central 30 degree field. This is done so that the image of the stimulus projected onto the retina by the optics of the eye is functionally equivalent, primarily in terms of intensity and size, between an emmetropic individual, who requires no refractive correction for clear focus, and myopes, hyperopes, and those with astigmatism, who require refractive correction to focus clearly. This allows the data to be placed clearly in context and allows the clinician to make calibrated treatment decisions, understanding how a subject's vision deficits will affect daily life over the course of their probable lifetime. The HFA includes a holder in which to place a trial lens matching the patient's refractive correction. Two slots on the holder allow manual addition of a spherical lens and a cylindrical lens. A potential upgrade to the HFA adds a lens (e.g. an Alvarez lens) with continuously adjustable optical power to correct refractive errors from approximately −10 to +10 Diopters by means of an electrically controllable moving part (see for example U.S. Pat. No. 8,668,338). The Alvarez lens does not, however, manage cylindrical power, so operators are recommended to use the trial lens setup with patients who have significant astigmatism. The operator typically records the lens power (and orientation in the case of astigmatism) to aid in the setup for the next exam. When testing outside the central 30 degree field, the operator should remove any trial lens to allow an unobstructed view.
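When an adjustable element supplies only spherical power, a common clinical approximation, assumed here rather than prescribed by the text above, is the spherical equivalent, which folds half the cylindrical power into the sphere:

```python
def spherical_equivalent(sphere_d, cyl_d):
    """Spherical equivalent of a spherocylindrical prescription, in
    Diopters: SE = sphere + cylinder / 2. A standard optometric
    approximation, not a procedure specified by the HFA itself."""
    return sphere_d + cyl_d / 2.0
```

For example, a −2.00 D sphere with −1.00 D of cylinder has a spherical equivalent of −2.50 D; for significant astigmatism this approximation is generally considered inadequate, consistent with the recommendation above to use trial lenses in such cases.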
The bowl of the HFA necessitates that the perimeter have a large volume. It is desirable to reduce the office footprint of a desktop device in clinical practice. A large device generally requires that the subject move to the device and conform their posture to the fixed position of the immobile device. The patients who require visual field testing are most frequently elderly and are frequently afflicted with comorbidities. Unnecessary movements around the clinic are both physically challenging for the patient and time consuming for the medical practice. Visual field testing may take several minutes. Attempting to maintain a precisely defined posture for an extended time becomes difficult or impossible for some subjects whose posture is constrained by age or disease. The effort involved in moving between multiple stations within a practice, or in maintaining a painful posture, requires concentration that can degrade testing performance. The HFA is generally a single purpose device. When a patient requires other tests, he or she moves to a different location to receive those tests. Visual field tests may be performed binocularly or monocularly. In the more frequent monocular test, the unused eye is typically covered by an eye patch to prevent perception of the stimulus.
Several alternative technologies for visual field analysis have been developed. One such technology, also marketed by the assignee herein under the trademarks Humphrey FDT® perimeter and Humphrey Matrix® perimeter, utilizes a frequency doubling illusion target stimulus created by a video monitor, while a lens near the eye magnifies the display to cover a large portion of the diagnostically relevant field of view. Several groups have proposed head mounted perimeters (see, for example, U.S. Pat. Nos. 5,737,060, 5,864,384, and 5,880,812). To date no head mounted device has achieved clinical acceptance. It is desirable that any head mounted device be light enough that it may be comfortably worn by an elderly patient for an extended period of time.
Light Field Displays
As described by Ren Ng, the “light field” is a concept that includes both the position and direction of light propagating in space (see for example U.S. Pat. No. 7,936,392). The idea is familiar from the ray representation of light. We know that light energy is conserved as it passes along straight line paths through space. The light energy can be represented in a 4-dimensional space L(u,v,s,t) with an intensity value at each of the (u,v) positions within a plane, and at angular rotations (s,t) about each of those axes. The concept is used extensively in computer graphics simulations. With the information from a light field, the rays can be propagated to destinations in other planes. The process of computing the light intensity at another plane and presenting it as if it were imaged on a virtual film is also called reconstruction. The methods described by U.S. Pat. No. 7,936,392 B2, as well as the doctoral thesis by the same author (R. Ng, “Digital light field photography”, 2006), are exemplary descriptions of light field sensor technology, the mathematics for propagation of light fields, and the practice of image reconstruction techniques using light fields; both are hereby incorporated by reference. Within computer graphics simulations it is also common to accurately render ray paths of light through refracting surfaces based upon a model of the space through which the rays travel. Upon interacting with a refracting surface, the angle of a ray of light is changed according to Snell's Law; after changing direction the ray again propagates in a straight line toward the destination plane.
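The two ray operations described above, free-space transfer to another plane and refraction at a surface, can be sketched in a few lines. The function names and the paraxial, single-axis scalar treatment are illustrative assumptions, not the patent's implementation:

```python
import math

def propagate(u, v, s, t, d):
    """Free-space transfer of a ray with lateral position (u, v) and
    direction angles (s, t) in radians, a distance d along the optical
    axis: straight-line travel, so position shifts by d*tan(angle) and
    the angles are unchanged."""
    return u + d * math.tan(s), v + d * math.tan(t), s, t

def refract(theta_i, n1, n2):
    """Snell's law at an interface between indices n1 and n2:
    n1 * sin(theta_i) = n2 * sin(theta_t). Returns theta_t in radians."""
    return math.asin(n1 * math.sin(theta_i) / n2)
```

After `refract` changes a ray's direction, `propagate` carries it in a straight line to the next plane, mirroring the two-step rendering process described above.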
A light field sensor for use in a digital focus camera is achieved by placing a sensor array 101 at or near the back focal plane of a microlens array (lenticular array) 102 as illustrated in FIG. 1. This light field sensor is placed in a supporting assembly containing other optical components, such as a main lens 103, to shape and constrain the light from the subject 104 to best fit the light field sensor geometry. In this way a ray is constrained in position by the individual lens in the array (lenslet) through which it passed, and in angle by the specific sensor pixel it is incident upon behind the lenticular array. Light field sensors may be created by other means currently known or by methods likely to be devised in the future. One such alternative light field sensor may use an array of pinholes instead of a lenticular array. Another alternative may place the sensor array at a distance significantly different from the back focal plane of the lenticular array (Lumsdaine et al., “The Focused Plenoptic Camera”, ICCP April 2009). Such variations may achieve advantages in angular or spatial resolution given a particular sensor or lenticular array spacing. Ren Ng describes properties of generalized light field sensors in his dissertation work which extend beyond the format of a simple lenticular array placed in front of a sensor array. It is to be understood that all such variations are included when we speak of a lenslet array as one representation of a light field sensor.
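The constraint that the lenslet encodes a ray's position while the pixel behind it encodes its angle can be illustrated with a toy coordinate mapping. The pitch, focal length, and pixel size below are hypothetical values, not parameters from the patent:

```python
import math

def lightfield_sensor_coords(x, theta, pitch, f, pixel_size):
    """Map a ray crossing the lenslet-array plane at lateral position x
    (mm) with angle theta (radians) to (lenslet index, pixel index).
    Assumes the sensor sits at the lenslet back focal plane, so the
    lateral pixel offset behind a lenslet encodes only the ray angle."""
    lenslet = round(x / pitch)          # which lenslet the ray enters (position)
    offset = f * math.tan(theta)        # lateral landing offset at the sensor
    pixel = round(offset / pixel_size)  # pixel index relative to lenslet center (angle)
    return lenslet, pixel
```

For a hypothetical 0.125 mm pitch, 0.5 mm focal length, and 0.01 mm pixels, an axial ray entering at x = 0.25 mm lands under lenslet 2 at the central pixel, and only a change in angle moves it to a different pixel under that same lenslet.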
A light field display incorporates similar concepts, however rather than detecting the light rays, the display creates them. Douglas Lanman describes how a virtual image of a wide field scene may be created by a light field display consisting of a high pixel density organic light emitting diode (OLED) array and a lenslet array (Douglas Lanman et al., “Near-Eye Light Field Displays”, in ACM SIGGRAPH 2013 Emerging Technologies, July 2013). He applies this technology to create a very compact, lightweight head mounted display, especially for the representation of 3D environments for entertainment and other purposes. The OLED display is located at the back focal plane of the lenslet array. Approximately underneath each lenslet, a small portion of a larger scene (partial image) is rendered by the OLED array. When the array is placed near the human eye, the pupil of the eye selects a small subset of the rays emitted by the display. The scene observed by the human eye through each lenslet makes up a small portion of a larger scene composed of the scenes transmitted through all of the lenslets. When the eye is aligned correctly relative to the display, and the images rendered correctly on the OLED display, a consistent widefield image is composed from the partial images.
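Because the display sits at the lenslet back focal plane, each lenslet collimates the light from a display point, so a scene direction maps to the same offset from every lenslet's center. A minimal 1-D sketch of rendering the partial images, with hypothetical function names and a point-list scene representation:

```python
import math

def partial_image_offset(theta, f):
    """With the display at the back focal plane of a lenslet of focal
    length f, a virtual scene point at infinity in direction theta
    (radians) is rendered at offset -f*tan(theta) from the lenslet
    center; the lenslet then emits those rays collimated."""
    return -f * math.tan(theta)

def render_partial_images(scene, lenslet_centers, f):
    """Build per-lenslet partial images for a scene given as a list of
    (direction, intensity) pairs: every lenslet repeats the same scene
    directions, shifted to its own center (a simplifying toy model)."""
    return {c: [(c + partial_image_offset(theta, f), intensity)
                for theta, intensity in scene]
            for c in lenslet_centers}
```

In this toy model the same scene point appears once under every lenslet, which is the redundancy that lets the eye's pupil compose a consistent wide-field image from whichever subset of rays it admits.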
Each lenslet has a small aperture compared to the eye, and therefore its scene has a large depth of field. That is, the partial image from each lenslet is relatively insensitive to focus errors. The scenes observed by the human eye through neighboring lenslets have a large degree of overlap. To the extent that corresponding pixels in neighboring scenes overlap at the retina, the scene appears to be in focus. That is, the same scene is projected on the retina, but from different angles, corresponding to different locations within the pupil. A focus error by the eye causes a lateral shift of the images on the retina when the light travels off center through the pupil, thus blurring the superposition of partial images. By changing the relationship between the overlapping scenes digitally, a computer rendering can simulate a change in ‘focus.’
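The blurring of superposed partial images under defocus can be sketched as a 1-D toy model. The linear shift-with-pupil-offset relation and arbitrary units below are simplifying assumptions for illustration:

```python
import numpy as np

def retinal_superposition(partial, pupil_offsets, defocus):
    """Superpose copies of a 1-D partial image, each shifted in
    proportion to its pupil entry offset times the defocus error
    (arbitrary units). With defocus = 0 all copies align and the sum
    stays sharp; nonzero defocus spreads them, blurring the sum.
    Pre-shifting the rendered partial images by the opposite amount
    would simulate refocusing digitally."""
    out = np.zeros(len(partial))
    for p in pupil_offsets:
        shift = int(round(p * defocus))   # lateral retinal shift for this pupil path
        out += np.roll(partial, shift)    # np.roll wraps; fine for a sketch away from edges
    return out / len(pupil_offsets)
```

Running this on an impulse image shows the effect directly: with zero defocus the five copies stack into a single sharp peak, while with defocus the same energy spreads across neighboring positions.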
The scene can be described as the superposition of views of a scene from different angles, or, using the notation familiar to light fields in general, the light energy can be represented in a 4-dimensional space L(u,v,s,t) with an intensity value at each of the (u,v) positions within a plane, and at angular rotations (s,t) about each of those axes. In a display the intensity value is set for each coordinate by turning on the associated source display pixel at the appropriate intensity. After propagating the light field to the image sensing retina of the eye, we consider a lateral retinal position (u,v), illuminated from angular (i.e. pupil location) coordinates (s,t). The summation of all angular channels at a position gives the integrated intensity at that position in the retinal plane, Ir(u,v).
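The angular summation just described can be written directly as an array reduction over the two angular axes; the array shapes below are hypothetical:

```python
import numpy as np

# A toy retinal light field L(u, v, s, t): an intensity for each lateral
# retinal position (u, v) and each angular (pupil-location) channel (s, t).
# Hypothetical shape: 8x8 positions, 4x4 angular channels, all set to 1.
L = np.ones((8, 8, 4, 4))

# Integrated retinal intensity Ir(u, v): sum all angular channels at each
# position, collapsing the 4-D light field to a 2-D retinal image.
I_r = L.sum(axis=(2, 3))
```

With all 16 angular channels set to unit intensity, each retinal position integrates to 16, illustrating how the perceived image is the sum over all pupil paths.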