CPC H04N 21/47217 (2013.01) [G06F 3/013 (2013.01); G06T 19/006 (2013.01); G06V 40/193 (2022.01); H04N 21/4333 (2013.01)]. 10 Claims.
1. A method for controlling video playing, comprising:
in response to at least one video being played or paused on a display screen, acquiring a current face image of a user in front of the display screen;
parsing the current face image by using an augmented reality-based gaze tracking method to determine a current visual focus of the user on the display screen; and
controlling, according to the current visual focus, the at least one video to continue playing or be paused on the display screen;
wherein controlling, according to the current visual focus, the at least one video to continue playing or be paused on the display screen comprises:
in response to detecting that the user blinks N times in succession within a preset time period, determining, according to the current visual focus, a video where the current visual focus is located, wherein N is a natural number greater than 1; and
in response to the video where the current visual focus is located being played on the display screen, controlling the video where the current visual focus is located to be paused on the display screen; or in response to the video where the current visual focus is located being paused on the display screen, controlling the video where the current visual focus is located to continue playing on the display screen;
wherein parsing the current face image by using the augmented reality-based gaze tracking method to determine the current visual focus of the user on the display screen comprises:
parsing the current face image by using the augmented reality-based gaze tracking method to determine an intersection of a left-eye gaze of the user and the display screen and an intersection of a right-eye gaze of the user and the display screen; and
determining the current visual focus according to the intersection of the left-eye gaze and the display screen and the intersection of the right-eye gaze and the display screen;
wherein parsing the current face image by using the augmented reality-based gaze tracking method to determine the intersection of the left-eye gaze of the user and the display screen and the intersection of the right-eye gaze of the user and the display screen comprises:
determining a spatial position of the left-eye gaze and a spatial position of the right-eye gaze through a Left Transform model and a Right Transform model corresponding to the user; and
determining the intersection of the left-eye gaze and the display screen and the intersection of the right-eye gaze and the display screen according to the spatial position of the left-eye gaze, the spatial position of the right-eye gaze and a pre-determined spatial position of the display screen; and
wherein determining the spatial position of the left-eye gaze and the spatial position of the right-eye gaze through the Left Transform model and the Right Transform model corresponding to the user comprises:
adding, in a three-dimensional space, a right-eye node and a left-eye node of the user, a left virtual anchor point corresponding to the left-eye node, and a right virtual anchor point corresponding to the right-eye node; and
determining the spatial position of the left-eye gaze according to a spatial position of the left-eye node and a spatial position of the left virtual anchor point; and determining the spatial position of the right-eye gaze according to a spatial position of the right-eye node and a spatial position of the right virtual anchor point.
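Although the claim is framed in legal language, its steps map onto a concrete gaze-tracking pipeline, sketched below from the innermost step outward. The claim's "Left Transform model and Right Transform model" resemble the per-eye transforms that AR frameworks expose (for example, ARKit's ARFaceAnchor provides leftEyeTransform and rightEyeTransform matrices). A minimal Python/NumPy sketch of the eye-node-plus-virtual-anchor step, under the assumption (not stated in the claim) that each eye is a 4x4 world transform whose -Z axis points along the gaze, so the virtual anchor is a point placed a fixed distance along that axis:

```python
import numpy as np

def gaze_ray(eye_transform: np.ndarray, anchor_dist: float = 2.0):
    """Build a gaze ray from an eye node and its virtual anchor point.

    eye_transform: 4x4 world transform of the eye node (e.g. ARKit's
    leftEyeTransform / rightEyeTransform). The virtual anchor is placed
    anchor_dist metres along the eye's -Z axis; this placement is one
    plausible reading of the claim's "virtual anchor point", assumed
    here for illustration.
    """
    origin = eye_transform[:3, 3]  # spatial position of the eye node
    anchor = (eye_transform @ np.array([0.0, 0.0, -anchor_dist, 1.0]))[:3]
    direction = anchor - origin    # gaze passes from eye node through anchor
    return origin, direction / np.linalg.norm(direction)
```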
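Given a gaze ray and the pre-determined spatial position of the display screen, the intersection step is a standard ray-plane test. This sketch assumes the screen's position is represented as a point on the screen plus a unit normal, which the claim does not specify:

```python
def ray_screen_intersection(origin, direction, screen_point, screen_normal):
    """Intersect a gaze ray with the screen plane.

    Returns the 3D intersection point, or None when the gaze is
    parallel to the screen or the screen lies behind the user.
    """
    denom = np.dot(direction, screen_normal)
    if abs(denom) < 1e-9:
        return None                 # gaze parallel to the screen plane
    t = np.dot(screen_point - origin, screen_normal) / denom
    if t < 0:
        return None                 # intersection behind the eye
    return origin + t * direction
```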
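The claim derives one visual focus from the two per-eye intersections but leaves the combination rule open; taking their midpoint is a simple choice, assumed here for illustration:

```python
def visual_focus(left_hit, right_hit):
    """Combine the left- and right-eye screen intersections into a
    single visual focus (midpoint; an assumption, not from the claim)."""
    if left_hit is None or right_hit is None:
        return left_hit if right_hit is None else right_hit
    return 0.5 * (left_hit + right_hit)
```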
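Finally, the control step: N blinks in succession within a preset time period toggle the video under the current visual focus between playing and paused. A sliding-window sketch; the video object and its play/pause/is_playing interface are hypothetical stand-ins for whatever player the display screen drives:

```python
import time

class BlinkToggle:
    """Fires the claim's gesture when N blinks land inside the window."""

    def __init__(self, n=2, window_s=1.0):
        self.n = n                  # N, a natural number greater than 1
        self.window_s = window_s    # the preset time period, in seconds
        self._blinks = []           # timestamps of recent blinks

    def on_blink(self, video, now=None):
        """Call once per detected blink; toggles the video when the
        N-blinks-in-window gesture completes."""
        now = time.monotonic() if now is None else now
        # Drop blinks that fell out of the sliding window, then record this one.
        self._blinks = [t for t in self._blinks if now - t <= self.window_s]
        self._blinks.append(now)
        if len(self._blinks) >= self.n:
            self._blinks.clear()
            if video.is_playing():  # playing -> pause (claim's first branch)
                video.pause()
            else:                   # paused -> continue playing
                video.play()
```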