With the advancement of computer technologies, particularly in the fields of virtual reality and augmented reality, interest in the development of new user input mechanisms has increased. One area of research focuses on the use of the user's visual field for controlling a computer system. By knowing the position of the user's gazing point, human-computer interactions can be enhanced, providing a new control interface for applications ranging from virtual environments, gaming controls, and augmented reality to neurological and psychological research. Using such a computer interface system, a user could control a computer or a portable device through the movement of their eyes, without using any input equipment such as a keyboard or a mouse. This control method could also effectively assist users with movement disabilities in controlling a computer or a portable device.
Gazing point detection and tracking technology has a long history of development, with diverse methodologies proposed. For example, remote eye tracking has been developed and used on smart televisions (TVs) for selecting channels, and gazing point tracking has also been developed to identify a customer's preferences and interests in order to optimize the customer experience in a shop. With the advancement in computational speed and the decrease in hardware dimensions, gazing point detection and tracking technology can now be employed in more sophisticated applications, with higher detection accuracy and fewer constraints on its use. Such human-computer interactions could enable new ways to control portable devices, such as smartphones and tablets, or home appliances, including computers and TVs, through the movement of a user's eyes.
Normally, gazing point trackers are portable platforms with at least one internal camera and one external camera, for monitoring the user's eye and the user's visual field respectively. The internal camera points towards the user's eyes for detecting the pupil, iris and/or eye glints (from corneal reflections), while the external camera is used to capture the user's visual field. In certain applications, the gazing point tracker can be integrated into a wearable device, such as a head-mounted frame, an eyewear article, or other headwear.
A conventional gazing point tracking method uses corneal reflections. As shown in FIG. 12, a camera directed towards the user captures images of the eye, wherein each image includes the corneal reflection of the light source (glint) and the position of the pupil center. The gazing point can be estimated according to the relative movement between the pupil center and glint positions. Such conventional methods require precise information on the locations of the light source and the camera. External ambient light from the environment may also affect the detection of the glint point. Furthermore, eye parameters, such as the pupil center, the center of curvature of the cornea and the corneal radius, are needed for gazing point detection. In view of the complexity of the calculations, there is a need for a method that simplifies the calibration process and the detection process without sacrificing the precision and robustness of the system.
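The conventional pupil-glint approach described above can be illustrated with a minimal sketch. In a common variant of this technique, a calibration step fits a simple affine mapping from the pupil-center-minus-glint vector to known fixation targets, and subsequent gaze points are estimated by applying that mapping. The function names, the affine model, and the least-squares fit below are illustrative assumptions, not the specific method of any particular system:

```python
import numpy as np

def fit_gaze_mapping(pg_vectors, targets):
    """Calibration: fit an affine map from pupil-glint difference
    vectors (dx, dy) to known fixation-target coordinates, by
    ordinary least squares. Returns a (3, 2) coefficient matrix."""
    pg = np.asarray(pg_vectors, dtype=float)          # (N, 2)
    A = np.hstack([pg, np.ones((pg.shape[0], 1))])    # add constant term
    t = np.asarray(targets, dtype=float)              # (N, 2)
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coeffs

def estimate_gaze(pupil_center, glint, coeffs):
    """Estimation: map a measured pupil-glint vector through the
    calibrated affine model to an estimated gaze point."""
    d = np.asarray(pupil_center, dtype=float) - np.asarray(glint, dtype=float)
    return np.array([d[0], d[1], 1.0]) @ coeffs
```

Even in this simplified form, the calibration depends on accurately detected glint positions, which is why external ambient light sources that produce spurious reflections degrade the method, as noted above.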
Although conventional methods are able to provide eye gaze detection and tracking, the consistency and precision of these detection methods are constantly challenged by the wide variations in eye parameters and other biological characteristics of users, such as eye radius, pupillary distance and pupil size. Moreover, existing detection methods may require a controlled environment and/or detailed geometric measurements of parameters such as the user's head pose, eye glint information, and the locations of light sources and cameras, which increase the complexity of implementation and may create inconvenience for the user. The use of glints may also introduce measurement errors when other external light sources are present. Some other detection methods may rely heavily on a display screen, such that the user may only fixate at points limited by the dimensions or relative position of the display screen. In addition, if the user's gazing point falls outside the display screen, the accuracy of gaze detection may be sacrificed.
In light of the issues raised above, there is a need for an improved device for gazing point detection in 3D space, and a method of use thereof, that solves one or more of these problems.