The current invention pertains to advanced light management, such as is useful in the processing and display of data related to light. For example, the current state of the art in display technology still falls short of providing image displays (from large-screen televisions to miniature worn displays, e.g., image-displaying glasses) with true and accurate depth perception, a natural (wide) field of view (FOV) and control of overlaid images. Worn displays have proliferated with the miniaturization of optical parts and the widespread need for portable imaging. There are numerous “heads-up” (worn) displays based on the image (from head-mounted projectors) passing through a collimator and entering the eye. The collimation (i.e., divergence-reduction) process makes the image appear more distal by making the light from any given pixel less divergent. Otherwise, a head-mounted projector would normally be too close for the eye to bring to a focus on the retina. The collimation process, in addition to creating the illusion of a larger, more distant display, also moves the apparent point of interest (POI, i.e., what the observer wants to see) beyond the near point so that the image can be resolved on the retina within a reasonable circle of confusion (CC). There are numerous variations on that theme, including those in which the image is reflected off of mirrors, deformable mirrors, mirror arrays and beamsplitters in transit to the eye.
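The role of the collimator can be illustrated with a simple thin-lens vergence calculation. The sketch below is illustrative only; the distances, lens powers and function names are assumptions for the example, not parameters of any referenced device:

```python
def vergence_after_lens(source_dist_m, lens_power_D):
    """Vergence (in diopters) of light after a thin lens.
    Diverging light from a point source a distance d in front of
    the lens arrives with vergence -1/d; the lens adds its power."""
    return -1.0 / source_dist_m + lens_power_D

def apparent_distance_m(v_out):
    """Apparent source distance implied by the output vergence.
    Zero vergence means collimated light: the image appears at
    infinity, comfortably beyond the eye's near point."""
    return float('inf') if v_out == 0 else -1.0 / v_out

# A micro-display 3.125 cm from the eye emits light with -32 D of
# divergence, far too steep for the eye to focus. A +32 D collimator
# cancels that divergence entirely:
v = vergence_after_lens(0.03125, 32.0)
print(apparent_distance_m(v))  # inf: the image appears distant

# A slightly weaker (+30 D) lens instead places the image 0.5 m away:
print(apparent_distance_m(vergence_after_lens(0.03125, 30.0)))  # 0.5
```

The second case shows why the collimator's power, fixed at manufacture, also fixes the image's single apparent focal plane.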
This external accommodation for the eye's limited diopters of focusing power for very near isotropic-pixel image sources, and the other hardware typically required in directing and managing the beam to the retina, all come at a high cost. That cost includes the expense of lenses (typically very precise and often compound lenses); the chromatic aberration and/or edge distortion, peripheral-vision and FOV-limiting factors associated with them; the weight of those classical optics; the need for a larger housing to accommodate the collimating and beam-managing optics; the often extreme sensitivity to critical alignment (and vibration-induced misalignment); and full or partial obstruction of the natural scene view (what would be seen without a display in the way) by all of these components. These components and the electrical power requirements of classical display technologies prevent their use altogether in many applications and make it difficult in others.
There are other problems with both 2-D and 3-D worn displays that are designed for assisting the viewer (the person viewing the image) with images that are either independent (only the display image is seen) or overlaid on a scene. These include:
Mixed views with conflicting focal distances: Mixing the image to be displayed against a complex landscape (that landscape having a potentially wide disparity in required focusing distances even in a narrow lateral proximity) is a complex task. As the viewer looks at a near projected image with a far object visible behind it, the eye, having a single lens-accommodation apparatus, attempts to capture both landscape and image using the same focus. When the eye's focus is set on a very near object, an also-near (i.e., high-divergence, as associated with a near isotropic emitter) display image may be in focus. If it is, then it will be out of focus if the viewer later fixates on a far object, delivering a confused view to the user that forces him to give attention selectively to the display to the detriment of the now-fuzzy background, or vice versa. In other words, conventional display systems cannot match the display view's effective plane of focus to that of the instant scene POI (which may lie at any of a wide variety of distances from the single effective focal plane provided by typical displays).
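The conflict can be quantified in diopters (reciprocal meters of viewing distance). The sketch below is illustrative; in particular, the 0.3 D depth-of-focus tolerance is a rough assumed figure for a typical eye, not a value taken from any referenced system:

```python
def focus_conflict_D(display_dist_m, scene_dist_m):
    """Difference in the eye's accommodation demand (diopters)
    between the display's effective focal plane and a scene POI."""
    return abs(1.0 / display_dist_m - 1.0 / scene_dist_m)

# Rough tolerance within which the eye sees both planes as sharp
# (an assumed figure, for illustration only):
DEPTH_OF_FOCUS_D = 0.3

# Display image fixed at 2 m while the scene POI sits at 0.5 m:
conflict = focus_conflict_D(2.0, 0.5)  # 1.5 D of mismatch
print(conflict > DEPTH_OF_FOCUS_D)     # True: one view must blur
```

Note that the mismatch grows rapidly as the scene POI nears the viewer, which is why hand-eye-coordination distances are the worst case.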
Safety: Soldiers in the Army's advanced military equipment program have confided that they routinely take off their heads-up displays to keep from tripping and falling down even at a slow walk due to severe spatial disorientation. This is because if they focus, for example, on a graphical pointer image (highlighting, for example, the azimuth and elevation of a threat) the ground is then out of focus causing them to be disoriented when they return their focus to the real world. More complex images (much more complex than a pointer or crosshair) create even more substantial problems.
Painful 3-D: The familiar binocular 3-D glasses (whether using red and green lenses, twin polarization, or blanking-interval selection) all create a phenomenon associated, in extended use, with disorientation, dizziness and often headaches or nausea. What is happening is that the brain is getting mixed signals. For example, the binocular overlap tells the brain that an object is at an overlap-indicated distance while the lens focus and interocular cant tell another story (one related to the actual image plane). This spatial disconnect forces the brain to process disorienting, conflicting data that both distracts and, even when the brain submits to a single component (e.g., binocular overlap), degrades the capacity for accurate depth perception and spatial orientation. In applications where the 3-dimensional (3-D) imaging is for the purpose of responding to a virtual or recreated distant environment, this can cause the user to miss a virtual target he grabs for, due to the spatial disconnects forced upon the brain.
Standing or walking 3-D: Current binocular 3-D systems, in addition to the disorientation and discomfort above, don't mix well with a forward scene. There are many applications where it would be ideal for a lightweight, inexpensive binocular-based 3-D system to overlay a 3-D image cleanly into the natural 3-D scene. However, the very nature of twin sets of isotropically emitting points in a display existing at one apparent distance and another set of isotropically emitting points at a substantially different distance has, until now, assured that one of the images would be out of focus. Thus, a viewer trying to walk or run while viewing the displayed image would find objects in the natural landscape out of focus.
Thus, to date, the ideal of a pair of ordinary looking, lightweight, power-efficient glasses providing the viewer with an additive displayed image that is unobstructed by projection overhead and is always in focus with the natural scene view has eluded designers.
Enhanced perception: The medical community, despite great advances in 3-D image capture, still lacks an effective method for integrating this 3-D image with the scene image of the patient in real time as the physician's perspective continually moves. This is especially problematic at hand-eye coordination distances where lens muscular tension (a brain-recognized indication of the focal plane of an eye) varies substantially with the near-field POI.
Spatial light modulators (SLM's) are widely used in projection systems. SLM's work in a number of fashions. Heat-based spatial light modulators are optical devices that modulate incident light with a phase pattern or amplitude determined by heat. Digital micro-mirror devices (DMD's) involve a matrix of micro-mirrors placed above a substrate. A voltage applied between a micro-mirror and an electrode allows individual adjustment of that micro-mirror's light-reflection angle. Further, timing how long light is reflected at a target can control the brightness. Liquid crystal devices (LCD's) are another form of spatial light modulation that is effective in controlling light at a pixel level. Various DMD technologies are disclosed in U.S. Pat. Nos. 4,956,619 and 5,083,857.
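The brightness-by-timing scheme of a DMD amounts to duty-cycle (pulse-width) modulation of a binary mirror. A minimal sketch, with assumed frame times and function names (illustrative only, not taken from the cited patents):

```python
def pixel_brightness(on_time_us, frame_time_us, source_lumens=1.0):
    """Perceived brightness from a binary micro-mirror: the mirror
    is either fully 'on' (reflecting toward the target) or 'off',
    so grey levels come from the fraction of each frame spent on."""
    return source_lumens * (on_time_us / frame_time_us)

def on_time_for_level(level, bits, frame_time_us):
    """On-time needed to show a given grey level at `bits` depth."""
    return frame_time_us * level / (2 ** bits - 1)

# Mid-grey (level 128 on an 8-bit scale) in a 1000-microsecond frame:
t = on_time_for_level(128, 8, 1000.0)
print(round(pixel_brightness(t, 1000.0), 2))  # 0.5
```

The point of the sketch is that the light source runs at constant power; intensity is encoded entirely in how long each mirror dwells in the 'on' position.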
The unique components of all patents referenced herein are not foundational to the claims of the current invention but are mentioned here for background.
U.S. Pat. No. 6,819,469 documents another form of SLM, based on chalcogenide materials, which have nonlinear optical properties and are thus useful as nonlinear optical fibers and filters. From that patent (U.S. Pat. No. 6,819,469): “Chalcogenide materials are known to be capable of reversible structural change between crystalline and amorphous state. These materials have highly nonlinear electrical conductance, which is used in many devices. As an example, U.S. Pat. No. 5,757,446, is for a LCD light modulator (display) in which ovonic (chalcogenide) material is used for pixel switching (selection) element, which allows to apply voltage to the pixel located on the given intersection of address lines, instead of traditional switching elements such as a diode or transistor.”
Electro-optic (EO) materials change their refractive index (RI) in an electric field, enabling yet another form of SLM. A first-order (linear) electro-optic effect, known as the Pockels effect, and a second-order (quadratic) electro-optic effect, known as the Kerr effect, occur in response to the electric field. EO materials have been shown to be effective dynamic lens materials where a matrix of selectively charged areas effects a desired matrix of refractive indexes for light to pass through. For example, a simple convex (converging) lens of a desired focal length can be emulated by an EO SLM by making the refractive index (RI) highest at the center of the matrix and progressively lower the more distal a point is from the center. For example, U.S. Pat. No. 4,746,942 describes a wafer of electro-optic material with an array of independently operated surface-mounted electrodes.
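The lens emulation can be sketched with the paraxial quadratic-phase profile of a thin lens: the optical path length n(r)·t through the EO layer must fall off as r²/(2f) from the center. The numbers and names below are illustrative assumptions (e.g., a base index of 2.2 and a 10-micrometer layer), not parameters from the referenced patent:

```python
def ri_profile(r_m, f_m, n0, thickness_m):
    """Refractive index needed at radius r so an EO layer of the
    given thickness imparts the quadratic phase of a thin lens of
    focal length f: the optical path n(r)*t falls off as r^2/(2f)
    from the center (paraxial approximation)."""
    return n0 - (r_m ** 2) / (2.0 * f_m * thickness_m)

# For a converging (convex-equivalent) lens of 10 cm focal length
# emulated in a 10-micrometer EO layer, the index is highest at the
# center and decreases toward the edge:
n_center = ri_profile(0.0, 0.1, 2.2, 10e-6)
n_edge = ri_profile(0.5e-3, 0.1, 2.2, 10e-6)
print(n_center > n_edge)  # True
```

In a real EO material the achievable index change is far smaller than this toy example implies, which is one reason such dynamic lenses are driven as stepped (Fresnel-like) phase profiles rather than a single smooth gradient.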
Other forms of SLM's include digital micro-mirror devices (DMD's), LCD's and a number of others in this emerging class of imaging devices that are effective for modulating the direction and/or characteristics of light at an individual pixel level.
However, all SLM's, particularly in embodiments where large light vector changes are required in a short space or in a very small area, are inadequate for providing an unimpeded scene view and a scene-overlaid displayed image from miniaturized worn display elements.
U.S. Pat. No. 7,158,317 introduces depth-of-field-enhancing patterns using Fresnel lenses, and Zalevsky et al. teach binary-phase masks that also sharpen an image at depths that conventional lenses would render defocused. [Zeev Zalevsky, Amir Shemer, Alexander Zlotnik, Eyal Ben Eliezer and Emanuel Marom, “All-optical axial super resolving imaging using a low-frequency binary-phase mask,” Optics Express, Vol. 14, No. 7, 3 Apr. 2006]
Gaming glasses provide a 3-D effect as a viewer watches a monitor through them. Gaming glasses that work on the shutter principle provide each eye with the image corresponding to its side of a binocular pair. Many do this by blocking, for example, the left eye very briefly while flashing the image for the right eye on the screen. The next cycle blocks the right eye and flashes the image for the left eye. In this manner, each eye sees an image appropriate for a desired distance in a binocular view.
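The shutter alternation can be sketched as a simple frame schedule (the function name and labels are illustrative assumptions):

```python
def shutter_schedule(n_frames):
    """Frame-sequential 3-D: on even frames the left shutter is open
    (right eye blocked) while the left-eye image is flashed; odd
    frames reverse it, so each eye sees only its half of the stereo
    pair, at half the display's native frame rate."""
    return ['left' if f % 2 == 0 else 'right' for f in range(n_frames)]

# Which eye is open on each of the first four frames:
print(shutter_schedule(4))  # ['left', 'right', 'left', 'right']
```

The halved per-eye frame rate is why such systems need a fast native refresh to avoid visible flicker.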
Laser projection is a promising area for traditional-screen and worn-display applications and allows a view of the scene combined with the display image. However, most implementations involve collimation optics, image projectors (to reproduce a scene that is physically blocked by the assembly) and beam splitters, which add layers of error and complexity and may severely limit effective FOV. The collimation optics themselves create physical weight and footprint overhead as well as multiple degrees of freedom of potential alignment error among individual components. This is further compounded by the interacting misalignments of the multiple collimation components required on equipment that is unfortunately destined for the impacts, water droplets, glass fog and filmy slime of hostile environments. Large-screen displays are currently not available as flat screens with efficient light control. The need exists for a power-efficient, broadly scalable flat-screen display (from handheld screen to billboard) that effectively delivers the vast majority of the light produced to the viewer, as opposed to the surrounding area outside the field of view (FOV) of the display, and that optionally operates in 2-D or 3-D.
Currently available HWD's are expensive; are sensitive to shock, misalignment of optical components, heat and moisture; and do not provide the broad FOV, good peripheral vision, easily viewed high-resolution scene view (integrated with the display view) or depth-enabled pixel display necessary for natural spatial perception. If this were accomplished, there would also be multiple applications in vision enhancement and correction, but the above limitations have not previously been overcome.
U.S. Pat. No. 6,181,367 discloses an ingenious device that casts raster-like rows of light along the surface of a transparent plate using total internal reflection (TIR) to keep the multi-diode-delivered light in the plate. This uncollimated light (TIR reflections and an intentional pass through a grating inject numerous light vectors) escapes the TIR (via charge-induced frustrated TIR, or FTIR) when a surface area's refractive index (RI) is modified. This escaping diffuse light is redirected by a hologram at each point of activity to produce an emission viewed as a pixel. However, even this use of reflectivity (TIR mediated by FTIR, where control of reflectivity is only used as an on/off switch for transmission vs. reflection) affects the quality of the light and ultimately image resolution.
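The TIR/FTIR gating described above can be illustrated with Snell's-law arithmetic: a guided ray stays trapped while its incidence angle exceeds the critical angle, and escapes where the effective index outside the boundary is locally raised. A simplified ray-optics sketch with assumed indexes (not values from the referenced patent):

```python
import math

def critical_angle_deg(n_core, n_outside):
    """Incidence angle (from the surface normal) above which light
    inside the plate is totally internally reflected."""
    return math.degrees(math.asin(n_outside / n_core))

def ray_escapes(incidence_deg, n_core, n_outside):
    """True if a guided ray refracts out of the plate instead of
    reflecting, i.e. the TIR condition no longer holds."""
    return incidence_deg < critical_angle_deg(n_core, n_outside)

# A ray guided at 70 degrees in a glass plate (n = 1.52) stays
# trapped against air, but escapes where the effective index at the
# surface is locally raised to 1.45 (frustrating the TIR):
print(ray_escapes(70.0, 1.52, 1.00))  # False: TIR holds
print(ray_escapes(70.0, 1.52, 1.45))  # True: pixel emits
```

The binary nature of this escape condition is exactly the on/off switching criticized above: the condition either holds or it doesn't, with no graduated control of the emitted light.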
Also, the degree of illumination is controlled by attenuation of power to the light source, which largely precludes or hampers the simultaneous activation of multiple pixels in the same raster line having different desired intensities, thus reducing potential refresh speed. Also, a binary electro-optic (EO) choice of off or on determines the state of the pixel but offers no graduated intensity control, as could be effected with an intensity based on the duration of activation or the amplitude of the activation signal to the EO modulator. Also, the switching of the laser is a key part of choosing which area is to be activated, which can further limit top refresh speed by limiting or eliminating concurrent activations of multiple pixels in the same row, all of which normally have different intensities. This becomes even more awkward when multi-color imaging is desired and, in a described embodiment, three or more diodes are required for each row of pixels.
The sequential switching of the elements in the row of light-emitting diodes as a means of providing light to numerous pixel areas as needed is also substantially slower (in the switching process) to respond to a needed activation than a device using a constant beam, which requires no waiting for the right LED in the sequence, at the correct brightness, to “come around again”.
Also, the dedication of a laser diode to each row in a raster-like display creates expense, fabrication-time and registration issues, and exacerbates wiring and timing issues; it becomes even more complex when three diodes or other color controls are required for each row. The switching of these as disclosed can also create issues “deriving from an NTSC video signal the signals applied to the laser diodes”.
The confining of each pixel's directive control to one or two holographic elements also reduces the flexibility, the potential precision and the uniformity of brightness control that could be effected by cooperating elements that produce a single pixel with broadly variable controlled properties.
The holograph-using embodiments taught in the referenced patent, particularly in the sizes responsive to the compactness required for high resolution (small pixels) and with the quality of phase-distorted TIR light reflecting off of a surface, cannot redirect light at as sharp an angle, or with as clean a beam, as the current invention.
Also, the purpose of the holographic element(s) for each pixel is to make the light appear to diverge isotropically at a manufacturer-set divergence relative to a one-time-chosen display distance. Thus the fixed holographic elements of the device cannot place objects at a context-sensitive or currently desired image location (depth), because the single depth of the projected image is fixed at the factory. Thus, the image's focal plane can conflict drastically with the focal distance of a nearby area of the scene, making it difficult to focus on both at the same time (e.g., when a displayed graphic or text is placed over a scene POI to identify it).
The disclosures of the referenced invention also fail to create any embodiments supportive of true 3-D imaging, though it does provide the means to provide one of the brain's multiple cues of depth perception, binocular overlap (as do red and green glasses). The disclosures indicate that “the virtual image viewed through a single eye will appear to be at an indeterminate distance” but “the actual appearance of depth will depend almost entirely on stereoscopic cues . . . ”. The broadly understood spatial disconnect between binocular overlap and the brain's other compelling depth-perception cues is responsible for the headaches, disorientation and sometimes nausea associated with using a 3-D mechanism limited to binocular overlap. Worse, the user does not have a true sense of the depth of the displayed images (having, in fact, a misleading one) that could have been used to exploit natural hand-eye coordination in applications where projected images are related to POI's in the scene view. For example, a surgeon viewing a positionally registered image of a depth-encoded MRI superimposed over a patient could enjoy a precise and natural sense of depth and hand-eye coordination while cutting into the patient and viewing a virtual POI inside the body (e.g., an artery to be avoided or a tumor to be precisely excised).
Finally, the holographic element(s) can react severely, and negatively, to bright ambient light sources and, even in controlled light, substantially distort the scene view (diffracting scene light that attempts to pass through). This is in addition to the deleterious scene-distorting effects of potential artifacts resulting from the FTIR selection of an entire row at a time (distorting the RI of that row for the scene view, with the chosen “column” diode identifying the pixel). These and other distorting elements in the referenced patent preclude its effective use in coordination with a scene view.
Despite a wealth of imaging technologies, both emerging and established, there is an unmet need for a worn display technology that enables any of the multiple objects of the current invention summarized below under “Summary of The Invention”. There is also an unmet need for the control of light, applicable to both imaging and data control applications (e.g. light transmission switching and selective attenuation of signal channels) without the above limitations.