A head-mounted display or helmet-mounted display, both abbreviated HMD, is a display device worn on the head or as part of a helmet. Although HMDs were initially developed for military use, they are now used in commercial aircraft, automobiles and other, mostly professional, applications. A primary application for HMDs is to create virtual reality environments for video games and to provide simulation and training. Use of the term “HMD” in this specification is intended to refer to any type of display device that is mounted to a user's head. These include, but are not limited to, virtual or augmented reality headsets such as Oculus Rift™ or Magic Leap™, helmet-mounted displays and eyewear displays such as Google Glass™.
A typical HMD has either one or two small displays with lenses, both embedded in a helmet, eyeglasses, a visor or other similar device. An HMD may employ multiple displays to increase total resolution and field of view. The display units are generally miniaturized and may include cathode ray tubes (CRTs), liquid crystal displays (LCDs), liquid crystal on silicon (LCoS) displays, or organic light-emitting diodes (OLEDs).
A small display lens is mounted in the HMD in front of one (monocular HMD) or each eye (binocular HMD) of a user. A binocular HMD has the potential to display a different image to each eye which can be used to show stereoscopic images.
The user's eye must be properly aligned with the HMD to ensure optimum optical characteristics, sharpness and focus. Misalignment or helmet shift can cause an inaccurate or distorted picture. Head fit, facial position and other factors make proper helmet fitting crucial to a user's ability to interface and interact with the system.
Misalignment can cause an inaccurate or distorted picture due to optical aberrations such as spherical aberration, optical coma, astigmatism and field curvature. When a user puts an HMD into a wearable position, the user's eye may not be properly aligned with the HMD, resulting in suboptimal performance of the display and lens system. Misalignment may be caused by errors in pupil or interocular distance, headset height or vertical offset, and pupil distance from the screen. Additionally, the distortions introduced by the display-lens system may be significant enough to require correction.
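As an illustration of such correction, pincushion distortion introduced by an HMD lens is commonly compensated by radially pre-distorting the rendered image before display. The following is a minimal sketch, assuming a polynomial radial model; the function name and the coefficients k1 and k2 are hypothetical placeholders, and real coefficients would come from calibration of the particular lens:

```python
def barrel_predistort(x, y, k1=-0.22, k2=-0.10):
    """Radially pre-distort a normalized display coordinate to
    compensate for pincushion distortion from an HMD lens.

    Coordinates are normalized so the lens center is at (0, 0).
    k1 and k2 are placeholder coefficients for illustration only;
    in practice they are measured for the specific display-lens system.
    """
    r2 = x * x + y * y                      # squared distance from lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # polynomial radial scaling
    return x * scale, y * scale
```

With negative coefficients the mapping pulls off-center pixels toward the lens center (a barrel pre-warp), so that the lens's pincushion effect approximately cancels it.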
One option for alleviating the effects of a user's head/face/eye misalignment is to use a camera, an illumination source and an eye tracking system as a calibration method. A camera captures images of a user's eye(s). The images include a glint caused by light from the illumination source reflecting from the user's eye directly back to the camera. Various image processing methods for identifying and locating a glint and pupil within captured images of a user's eye are known.
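One simple such image processing method is grayscale thresholding: the pupil appears as the darkest region of the eye image and the glint as the brightest. The sketch below illustrates this under stated assumptions; the function name and threshold values are hypothetical, and practical systems would add connected-component filtering and per-device tuning:

```python
import numpy as np

def locate_pupil_and_glint(gray, pupil_thresh=40, glint_thresh=220):
    """Estimate pupil and glint centers in a grayscale eye image.

    Illustrative sketch only: the pupil center is taken as the centroid
    of dark pixels and the glint as the centroid of bright (specular)
    pixels. Thresholds are placeholder values.
    """
    pupil_mask = gray < pupil_thresh    # dark pixels -> candidate pupil
    glint_mask = gray > glint_thresh    # bright pixels -> candidate glint

    def centroid(mask):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None                 # feature not found in this frame
        return (xs.mean(), ys.mean())

    return centroid(pupil_mask), centroid(glint_mask)
```

The vector from glint to pupil center is what a glint-based tracker typically feeds into its gaze estimate, since it is less sensitive to small headset shifts than the raw pupil position.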
In a typical camera tracking system used for calibration, the user may be asked to fix his or her gaze upon certain points in a display. At each displayed coordinate location, a corresponding gaze direction may be computed.
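The fixation procedure above amounts to fitting a mapping from measured eye features (for example, pupil-glint vectors) to known screen coordinates. A minimal sketch of one common approach, an affine least-squares fit, is shown below; the function names are assumptions for illustration, and real calibrations often use higher-order polynomial models:

```python
import numpy as np

def fit_gaze_mapping(eye_features, screen_points):
    """Fit an affine map from eye features to screen coordinates.

    eye_features: N x 2 array of measured features (e.g. pupil-glint
    vectors) recorded while the user fixates known targets.
    screen_points: N x 2 array of the corresponding target coordinates.
    Returns a 3 x 2 matrix M such that [x, y, 1] @ M ~= screen point.
    """
    eye = np.asarray(eye_features, dtype=float)
    scr = np.asarray(screen_points, dtype=float)
    A = np.hstack([eye, np.ones((len(eye), 1))])   # add constant term
    M, *_ = np.linalg.lstsq(A, scr, rcond=None)    # least-squares solve
    return M

def map_gaze(M, feature):
    """Apply the fitted mapping to a new eye-feature measurement."""
    f = np.append(np.asarray(feature, dtype=float), 1.0)
    return f @ M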
U.S. Pat. No. 5,481,622 to Gerhardt et al., entitled “Eye Tracking Apparatus and Method Employing Grayscale Threshold Values,” teaches a head-mounted eye-tracking system. The user gazes at a cursor placed at a known position on a display screen, and the invention determines the pupil center position. Cameras capture images that include reflections from the user's cornea. The system includes a set of light sources within the user's view of the display screen. The light sources produce glints as seen by the cameras, and the system determines the user's eye position for calibration.
Camera systems may improve display accuracy. However, the reflection from each cornea may be distorted by variations in corneal curvature and by the position of the camera or light source relative to the user's eye. Also, the user might be asked to click a mouse button or identify a cursor after gazing at an image. One problem with this approach is that it relies heavily on the user's attention: the user may look away and then click the mouse button or select the cursor position.
In addition, using a camera and eye tracking system may be computationally intensive, as the camera system must identify the user's pupil position in each frame. Such a system is also expensive because it requires a camera, an illumination source and the additional processing power needed by the eye tracking system.
Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of common general knowledge in the field.