A virtual reality (VR) system environment includes a VR headset configured to present content to a user via an electronic display and a VR console configured to generate content for presentation to the user and to provide the generated content to the VR headset for presentation. To improve user interaction with presented content, the VR console modifies or generates content based on a location where the user is looking, which is determined by tracking the user's eye. Accordingly, the VR headset illuminates a surface of the user's eye with a coherent light source mounted to (e.g., inside) the VR headset, such as a laser.
An imaging device included in the VR headset captures light reflected by the surface of the user's eye. In some embodiments, light reflected from the surface of the user's eye may be polarized by a reflective light polarizer or refracted by a lens assembly that focuses or otherwise modifies the light before an imaging sensor in the imaging device receives it. As the surface of the eye is rough, light captured by the imaging sensor of the imaging device may be a speckle or diffraction pattern formed from a combination of light reflected from multiple portions of the surface of the user's eye.
In some embodiments, the VR headset performs one or more image processing operations to improve the contrast of an image generated from the light captured by the imaging device. Example image processing operations include sensor corrections (e.g., black-level adjustment, lens distortion correction, gamma correction) and illumination level corrections (e.g., white balance correction). The VR headset may also perform histogram equalization or any other technique to increase the contrast of the image from the captured light. In some embodiments, the VR headset may perform illumination level corrections to reduce noise caused by variable illumination of the surface of the user's eye by the electronic display or by an external light source. Alternatively or additionally, the VR console performs one or more image processing operations on images obtained by the imaging device in the VR headset and communicated from the VR headset to the VR console.
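The contrast-enhancement step described above can be sketched with histogram equalization. The following is an illustrative Python sketch only; the function name and parameter choices are assumptions, not part of the disclosure:

```python
import numpy as np

def equalize_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Spread pixel intensities across the available range to raise contrast.

    `image` is a 2-D array of integer gray levels in [0, levels).
    """
    # Histogram of gray levels, then cumulative distribution function (CDF).
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    # Normalize the CDF to [0, 1] and use it to remap each pixel.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return np.round(cdf[image] * (levels - 1)).astype(image.dtype)
```

In a speckle image captured under weak or uneven illumination, this remapping stretches the narrow band of observed intensities over the full output range, which aids the later shift-estimation steps.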
The VR headset sends eye tracking data comprising an image captured by the imaging device from the captured light, or data derived from the captured image, to the VR console. For example, the eye tracking data includes a version of the captured image modified through one or more image processing operations. As another example, the eye tracking data includes an image captured by the image capture device and data describing lighting of the surface of the user's eye by sources other than the coherent light source. Alternatively, the VR headset includes components to track the eye of the user, so the VR headset does not send the eye tracking data to the VR console.
In some embodiments, the VR console verifies that the received eye tracking data corresponds to a valid measurement usable to accurately determine eye position. For example, the VR console determines a representative figure of merit of the eye tracking data and compares the representative figure of merit to a validity threshold. If the representative figure of merit is less than the validity threshold, the VR console determines the received eye tracking data is invalid. However, if the representative figure of merit equals or exceeds the validity threshold, the VR console verifies the received eye tracking data corresponds to a valid measurement. The representative figure of merit may be a sum, an average, a median, a range, a standard deviation, or other quantification of pixel values in image data (e.g., pixel gray levels, luminance values, relative pixel intensities). The representative figure of merit may be determined from figures of merit of all pixels in an image included in the received eye tracking data or estimated from a subset of pixels in that image by sampling techniques. For example, when a user blinks, a sum of the pixel intensity values decreases, so the VR console determines that the received eye tracking data is invalid in response to determining a sum of relative pixel intensities is less than the validity threshold. In various embodiments, the validity threshold is specified during manufacture of the VR headset or determined during calibration of the VR headset. When determining a figure of merit based on relative pixel intensities, the indices of the pixels for which relative intensity is determined affect the determination of the figure of merit in various embodiments.
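The validity check described above can be sketched as follows, assuming the sum of pixel intensities as the representative figure of merit; the function name and the choice of sum over the other listed quantifications are illustrative assumptions:

```python
import numpy as np

def is_valid_measurement(image: np.ndarray, validity_threshold: float) -> bool:
    """Treat a frame as a valid measurement only if its representative figure
    of merit (here, the sum of pixel intensities) meets the threshold.

    A blink darkens the frame, so the sum drops below the threshold and the
    frame is rejected.
    """
    figure_of_merit = float(image.sum())
    return figure_of_merit >= validity_threshold
```

A sampled estimate, as the paragraph above permits, would sum a random subset of pixels and scale by the inverse sampling fraction rather than summing every pixel.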
To account for varying external illumination conditions when verifying the validity of the received eye tracking data, the validity threshold may be dynamically determined based on a trailing average of representative figures of merit of previously received eye tracking data that was captured within a threshold time of the received eye tracking data or a trailing average of representative figures of merit of previously received eye tracking data that was captured within the threshold time of the received eye tracking data and was determined to be valid.
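One possible reading of this dynamic threshold keeps a trailing average of the figures of merit of recent valid frames and scales it by a factor below one; the `window` and `scale` parameters below are illustrative assumptions, not values from the disclosure:

```python
from collections import deque

class DynamicValidityThreshold:
    """Validity threshold derived from a trailing average of recent valid
    figures of merit, so the check adapts to changing external illumination."""

    def __init__(self, window: int = 30, scale: float = 0.8):
        self.history = deque(maxlen=window)  # recent valid figures of merit
        self.scale = scale                   # threshold as a fraction of the average

    def update_and_check(self, figure_of_merit: float) -> bool:
        if not self.history:
            # No baseline yet; accept the first frame to seed the average.
            self.history.append(figure_of_merit)
            return True
        threshold = self.scale * (sum(self.history) / len(self.history))
        valid = figure_of_merit >= threshold
        if valid:
            # Only valid frames feed the trailing average, matching the
            # second variant described above.
            self.history.append(figure_of_merit)
        return valid
```

Because invalid frames (e.g., blinks) are excluded from the average, a blink does not drag the threshold down, and tracking resumes as soon as a bright frame returns.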
The VR console accesses calibration data for determining an eye position from the received eye tracking data. The calibration data may include a subpixel distance indicating a distance on the surface of the user's eye corresponding to a subpixel of the image sensor of the image capture device. If a subpixel of the image sensor corresponds to a rectangular (or elliptical) area on the surface of the user's eye, the calibration data may include two subpixel distances corresponding to orthogonal directions along the surface of the user's eye (e.g., a length and a width of an area on the surface of the user's eye). The subpixel distance may be determined in part from a distance between the image sensor and the surface of the user's eye. The distance between the image sensor and the surface of the user's eye may be determined during a calibration period or dynamically determined via a range finding device included in the VR headset (e.g., a laser rangefinder, sonar). In various embodiments, the VR headset periodically determines this distance (e.g., once per second), determines it in response to the VR headset powering on, or determines it in response to receiving measurement signals from a position sensor included in the VR headset indicating an adjustment of the VR headset on the user's head. The subpixel distance may be determined by multiplying an angle, in radians, corresponding to a pixel, which is a property of the image capture device, by the distance between the image sensor and the surface of the user's eye. Using the subpixel distance, the VR console determines a change in eye position from a subpixel shift between two images of the surface of the user's eye from received eye tracking data.
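The multiplication described above is a small-angle approximation and can be expressed directly; the function name, the millimeter units, and the `subpixels_per_pixel` divisor below are illustrative assumptions:

```python
def subpixel_distance(angle_per_pixel_rad: float,
                      sensor_to_eye_mm: float,
                      subpixels_per_pixel: int = 1) -> float:
    """Distance on the eye surface covered by one subpixel.

    For small angles, arc length is approximately angle (radians) times
    distance; dividing by the subpixels per pixel converts a per-pixel
    distance to a per-subpixel distance.
    """
    return angle_per_pixel_rad * sensor_to_eye_mm / subpixels_per_pixel
```

For example, a camera subtending 0.001 radians per pixel at 20 mm from the eye surface yields 0.02 mm (20 micrometers) of eye surface per pixel; a rectangular subpixel area would use this computation once per orthogonal direction.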
Alternatively or additionally, the VR console accesses calibration data from a table (e.g., a lookup table) comprising reference images captured during a calibration period. The reference images correspond to known eye positions, particular eye gaze points on the electronic display of the VR headset, or both. During an example calibration period, the VR headset instructs the user to gaze at a series of icons on the electronic display and captures a reference image when the user gazes at each icon. The reference image corresponds to the eye gaze point of the icon at the time of capture, and the VR console infers an eye position corresponding to the reference image from a model of the eye and other eye tracking systems included in the VR headset. The VR console may store the reference images or may store a condensed representation of the reference image to facilitate matching with subsequent images from received eye tracking data. For example, the VR console generates a fingerprint for each reference image, extracts features (e.g., blobs, edges, ridges, corners) from each reference image, or both. An extracted feature may be stored in association with information identifying the feature's position on the surface of the user's eye, values of the feature's constituent pixels, or both. Using the reference images (or condensed representations thereof), the VR console may determine an eye position with reference to a single image from the received eye tracking data.
Using the accessed calibration data, the VR console determines an eye position from the received eye tracking data. In some embodiments, the VR console obtains a reference image associated with a reference eye position. For example, the image capture device captures the reference image at the same time another eye tracking system (e.g., a slow eye tracking system) independently determines the reference eye position. The VR console determines an updated eye position by determining a subpixel shift between an updated image and the reference image, determining an eye shift distance from the subpixel shift, and combining the reference eye position with the eye shift distance. To determine the subpixel shift, the VR console may use any motion tracking or optical flow technique (e.g., phase correlation, block matching, differential optical flow methods). The VR console determines the eye shift distance by multiplying the determined subpixel shift by the subpixel distance value from the accessed calibration data. The subpixel shift may be two-dimensional (e.g., 5 subpixels up, 3 subpixels left), so the eye shift distance may be two-dimensional as well (e.g., 50 micrometers up, 30 micrometers left). Using the eye shift distance, the VR console determines the updated eye position by shifting the reference eye position by the eye shift distance. When determining the updated eye position, the VR console may update the eye's orientation and location, determine updated axes of eye rotation, determine a new gaze location on the electronic display, or a combination thereof.
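The shift-and-combine computation can be sketched with phase correlation, one of the techniques named above. This sketch recovers only integer pixel shifts; a production implementation would interpolate the correlation peak for true subpixel precision, and the function names are illustrative assumptions:

```python
import numpy as np

def pixel_shift(reference: np.ndarray, updated: np.ndarray) -> tuple:
    """Estimate the (row, col) translation from `reference` to `updated`
    by phase correlation: whiten the cross-power spectrum, then locate
    the peak of its inverse FFT."""
    cross_power = np.fft.fft2(updated) * np.conj(np.fft.fft2(reference))
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase, drop magnitude
    correlation = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint wrap around and represent negative shifts.
    shifts = [int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, correlation.shape)]
    return shifts[0], shifts[1]

def updated_eye_position(reference_position, shift, subpixel_distance_mm):
    """Shift the reference eye position by the physical eye shift distance
    (pixel shift times the calibrated per-subpixel distance)."""
    return tuple(r + s * subpixel_distance_mm
                 for r, s in zip(reference_position, shift))
```

With a calibrated subpixel distance of 10 micrometers, a detected shift of 5 subpixels up and 3 subpixels left maps to the 50 and 30 micrometer displacements given in the example above.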
Alternatively or additionally, the VR console determines the eye position by matching an updated image with a reference image from accessed calibration data. The VR console compares the image from the image capture device to various reference images to determine a matching reference image. The VR console may determine the matching reference image by scoring reference images based on a degree of matching the updated image and selecting a reference image with the highest score. Alternatively or additionally, the reference images are compared to the updated image and scored until a reference image having a score exceeding a threshold value is identified. If the image capture device captures an image corresponding to 1 square millimeter of the eye, the calibration data includes about 500 images corresponding to different portions of the surface of the user's eye capable of being imaged over the eye's full range of motion. In some embodiments, the VR console generates a condensed representation of the updated image (e.g., a fingerprint, a set of features), and compares the condensed representation of the updated image to condensed representations of the reference images to reduce time and computation resources for determining the matching reference image. When the VR console determines the matching reference image, the VR console determines the updated position by adjusting the reference position associated with the matching reference image by a subpixel shift between the updated image and the reference image.
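The scoring-and-selection step can be sketched with normalized cross-correlation as an assumed scoring function; the disclosure does not specify the metric, and fingerprints or extracted features could replace the full-image comparison shown here:

```python
import numpy as np

def match_reference(updated: np.ndarray, references: dict) -> str:
    """Score each reference image against the updated image and return the
    key of the highest-scoring (best-matching) reference.

    The score here is normalized cross-correlation: standardize both images
    to zero mean and unit variance, then average their elementwise product,
    giving values near 1.0 for a close match and near 0.0 for unrelated
    speckle patterns.
    """
    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    return max(references, key=lambda key: ncc(updated, references[key]))
```

The early-exit variant described above would instead iterate over the references and return the first key whose score exceeds the threshold value, trading best-match quality for fewer comparisons.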
The VR console determines content for presentation by the VR headset based on the determined eye position. For example, the VR console uses an estimated gaze point included in the determined eye position as an input to a virtual world. Based on the gaze point, the VR console may select content for presentation to the user (e.g., selects a virtual anime creature corresponding to the gaze point for deployment against another virtual anime creature in a virtual gladiatorial contest, navigates a virtual menu, selects a type of sports ball to play in the virtual world, or selects a notorious sports ball player to join a fantasy sports ball team).
The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.