Information access asymmetry, a situation in which one party has easier and more continuous access to information than another, can be advantageous in military conflicts. The highest-bandwidth sensory channel of a human is the visual channel, which can receive information at roughly 1250 MB/s, making a visual display a useful means of presenting information and creating information access asymmetry. Traditional displays may be used, but soldiers must usually remove their gaze from the outside environment to see the information on the display. Head-mounted displays (HMDs), by contrast, can relay information to a soldier from a display system embedded in a headset and keep the displayed information in the soldier's field of view (FOV), allowing the soldier to remain fully engaged in the scene while referencing the displayed information to real-world objects and events.
Currently available Augmented Reality (AR) HMDs usually overlay virtual images produced by a micro-display onto the real world via an optical relay system comprising lens elements and optical combiners. The use of polarization and color filters in typical micro-LCDs, together with beam splitters for optical combination, can reduce the light-use efficiency of these systems to under 5%. This can result in very high power consumption when producing images bright enough to be seen in daytime ambient light. Discrete bulk optical components and large power sources also increase the size and weight of the HMD, hindering its use in scenarios where high mobility is desirable.
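The sub-5% figure follows from multiplying the losses of each stage in the optical path. The sketch below illustrates this with assumed per-stage transmission values (a polarizer passing half of unpolarized light, a color-filter mosaic passing about a third of the spectrum, and a 50/50 beam-splitter combiner traversed twice); these numbers are representative assumptions, not measurements of any specific device.

```python
# Illustrative light-budget sketch for a micro-LCD-based AR HMD optical
# path. Each transmission value below is an assumed, representative
# figure used only to show how cascaded losses compound.

stages = {
    "polarizer": 0.50,      # linear polarizer passes ~half of unpolarized light
    "color_filter": 0.30,   # RGB color-filter mosaic passes ~1/3 of the spectrum
    "beam_splitter": 0.25,  # 50/50 combiner traversed twice: 0.5 * 0.5
}

efficiency = 1.0
for name, transmission in stages.items():
    efficiency *= transmission

print(f"overall light-use efficiency: {efficiency:.1%}")  # 3.8%
```

Under these assumptions the display must emit more than 25 times the luminance that ultimately reaches the eye, which is why daytime-visible imagery drives power consumption so sharply.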
In addition, optical relay systems used in commercially available HMDs typically have horizontal and vertical FOVs no larger than 40°; by comparison, the near-peripheral FOV of the human eye is about 60°. Furthermore, in current HMD architectures the micro-display image is often magnified such that it appears at a single virtual focal plane, which may cause vergence-accommodation conflict in binocular systems. A person viewing a scene with two eyes makes two coupled responses: the brain converges the eyes so that both are directed at the point of interest (vergence), and the lens within each eye is focused to sharpen the image on the retina (accommodation). When a display produces an image appearing at a single virtual focal plane, the focus cues—accommodation and blur in the retinal image—typically specify the depth of the display rather than the depths in the depicted scene, while the vergence of the eyes is driven by the depicted scene, creating a conflict between vergence and accommodation. Vergence-accommodation conflict can force the viewer's brain to adapt unnaturally to conflicting cues, increasing the fusion time of binocular imagery while decreasing fusion accuracy. This in turn can cause visual fatigue (e.g., asthenopia), limiting prolonged wearability.
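The size of the conflict is commonly expressed in dioptres (reciprocal metres): accommodation is pinned to the display's focal plane while vergence follows the depicted object. The sketch below quantifies this for assumed example values (a 2 m virtual focal plane, a depicted object at 0.5 m, and a typical 63 mm interpupillary distance); all three numbers are illustrative assumptions.

```python
import math

# Illustrative vergence-accommodation mismatch for a single-focal-plane
# HMD. All distances in metres; the specific values are assumptions.
IPD = 0.063            # interpupillary distance (assumed typical adult value)
display_focus_m = 2.0  # fixed virtual focal plane of the display
object_depth_m = 0.5   # depth of the depicted virtual object

accommodation_D = 1.0 / display_focus_m         # focus demand, in dioptres
vergence_D = 1.0 / object_depth_m               # vergence demand, in dioptres
conflict_D = abs(vergence_D - accommodation_D)  # mismatch, in dioptres

# Vergence angle the eyes must adopt to fixate the depicted object.
vergence_angle_deg = 2 * math.degrees(math.atan((IPD / 2) / object_depth_m))

print(f"accommodation demand: {accommodation_D:.2f} D")  # 0.50 D
print(f"vergence demand:      {vergence_D:.2f} D")       # 2.00 D
print(f"conflict:             {conflict_D:.2f} D")       # 1.50 D
print(f"vergence angle:       {vergence_angle_deg:.1f} deg")
```

In this assumed example the eyes must converge as if the object were at 0.5 m while focusing at 2 m, a 1.5 D mismatch; a multi-focal or varifocal architecture would drive `display_focus_m` toward `object_depth_m`, shrinking the conflict toward zero.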