Eye gaze tracking is used in a wide range of applications, including medical research, automotive technology, computer entertainment and video games, control input devices, augmented reality glasses, and more.
Some known eye gaze tracking techniques involve illuminating the eyes by emitting light from one or more light sources and detecting reflections of the emitted light off of the eyes with a sensor. Typically, this is accomplished using invisible light sources in the infrared range and capturing image data (e.g., images or video) of the illuminated eyes with an infrared sensitive camera. Image processing algorithms are then used to analyze the image data to determine eye gaze direction.
Generally, eye tracking image analysis takes advantage of distinctive characteristics of how light reflects off of the eyes to determine eye gaze direction from the image. For example, the image data may be analyzed to identify eye location based on corneal reflections, and may be further analyzed to determine gaze direction based on the relative location of the pupils in the image.
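The corneal reflection (or "glint") described above is typically the brightest feature in the captured image, so a minimal detection step can be sketched as thresholding followed by a centroid computation. The sketch below is illustrative only; the function name, threshold value, and plain-list image representation are assumptions, not part of the original disclosure, and a real system would use a proper image library and more robust blob detection.

```python
# Hypothetical sketch: locating a corneal reflection ("glint") in a
# grayscale image by thresholding bright pixels and taking their centroid.
# The image is represented as a 2D list of intensities in [0, 255];
# all names and the default threshold are illustrative assumptions.

def find_glint_centroid(image, threshold=200):
    """Return the (row, col) centroid of pixels brighter than threshold,
    or None if no pixel exceeds it."""
    row_sum = col_sum = count = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)
```

The same centroid approach can locate the bright pupil spot in bright pupil tracking, with the threshold tuned to the pupil's intensity rather than the glint's.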
Two common gaze tracking techniques for determining eye gaze direction based on pupil location are known as Bright Pupil tracking and Dark Pupil tracking. Bright Pupil tracking involves illuminating the eyes with a light source substantially in line with the optical axis of the camera, causing the emitted light to be reflected off of the retina and back to the camera through the pupil. The pupil then presents in the image as an identifiable bright spot, similar to the red eye effect that occurs in conventional flash photography. Dark Pupil tracking involves illuminating the eyes with a light source substantially off line from the optical axis of the camera, causing light directed through the pupil to be reflected away from the camera's optical axis, resulting in an identifiable dark spot in the image at the location of the pupil.
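In Dark Pupil systems, a commonly used feature is the vector from the corneal glint to the pupil center, which is then mapped to a gaze point. The sketch below shows one minimal, assumed form of that mapping: a per-axis linear model whose coefficients would come from user calibration. The function names, the linear form, and the coefficient layout are illustrative assumptions; practical systems use richer polynomial or model-based mappings.

```python
# Hypothetical sketch of dark-pupil gaze estimation. The feature is the
# pupil-glint vector in image pixels; a simple linear model (assumed here)
# maps it to screen coordinates. All names and the coefficient layout
# are illustrative, not taken from any particular system.

def pupil_glint_vector(pupil_center, glint_center):
    """Vector from the glint to the pupil center, in image pixels."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])

def map_to_screen(vector, coeffs):
    """Per-axis linear map (ax*vx + bx, ay*vy + by) to screen pixels.
    coeffs = (ax, bx, ay, by), assumed obtained during calibration."""
    ax, bx, ay, by = coeffs
    return (ax * vector[0] + bx, ay * vector[1] + by)
```

For example, a pupil center two pixels right of the glint maps, under assumed coefficients (100, 960, 100, 540), to a gaze point right of screen center.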
There are other known eye tracking techniques. For example, some eye tracking systems forgo the infrared light source and rely on environmental light to provide the reflections used to track the eye. Other more invasive techniques exist, such as techniques which rely on specialized contact lenses, which may also work in conjunction with a sensor such as an infrared camera.
With many of these eye tracking techniques, optimal performance of the tracking system depends on accurate calibration of a variety of geometric parameters, particularly with reflection-based techniques. Relevant calibration parameters may include both user eye parameters, such as iris size, eye curvature, pupil depth relative to the cornea, interpupillary distance (IPD), and iris texture, as well as geometric parameters of the system, such as relative light source location, sensor location, and the size and location of a display screen used in conjunction with the eye tracking system.
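The two groups of calibration parameters listed above can be sketched as simple data structures, which makes the user/system split concrete. The class names, field names, and units below are illustrative assumptions only; an actual system would define its own parameterization.

```python
# Hypothetical sketch of the calibration parameters named above, split
# into user eye parameters and system geometry. Names, fields, and the
# choice of millimeter units are illustrative assumptions.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class EyeParameters:
    """Per-user eye parameters (millimeters)."""
    iris_size_mm: float
    eye_curvature_mm: float       # corneal radius of curvature
    pupil_depth_mm: float         # pupil depth relative to the cornea
    ipd_mm: float                 # interpupillary distance

@dataclass
class SystemGeometry:
    """Geometric parameters of the tracking setup (millimeters),
    positions given relative to an assumed common origin."""
    light_source_pos: Tuple[float, float, float]
    sensor_pos: Tuple[float, float, float]
    screen_pos: Tuple[float, float, float]
    screen_size_mm: Tuple[float, float]   # (width, height)
```

Separating the two groups mirrors the calibration problem itself: the eye parameters travel with the user, while the system geometry changes whenever components are rearranged.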
One way to calibrate eye tracking to a system's geometric parameters is to utilize a fixed geometric relationship between a display screen, a light source, and a sensor. For example, a display device may be provided with all of these components fixed in a common casing, in which case the relative locations of these components and the dimensions of the display screen would be known from the specifications to which the device is built. However, in many situations this is not an attractive solution because it restricts the ability of the eye tracking device (e.g., light source and/or sensor) to be provided independently from the display device, preventing such a tracking system from being used with preexisting displays and limiting the ability to upgrade the tracking device or the display independently of one another. Furthermore, it limits flexibility in how the system is set up and ties the tracking device to a particular display.
Another potential way to calibrate an eye tracking system is to have the user (i.e., end-user) manually input the calibration parameters after setting up the system's components. However, this would be a time-consuming and unreliable process that would significantly detract from the user experience. Furthermore, the system would have to be recalibrated any time the underlying calibration parameters changed, e.g., if a component of the system were moved, further detracting from the user experience.
It is within this context that aspects of the present disclosure arise.