Binocular 3D vision simulates the principles of human vision: an object is observed from two or more viewpoints, images are acquired at different view angles, and, based on the matching relationship of pixels between the images, the disparity between matched pixels is calculated by the triangulation measurement principle to obtain the 3D information of the object.
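The triangulation step described above can be sketched for a rectified binocular rig, where depth follows from disparity via Z = f·B/d. The focal length, baseline, and disparity values below are illustrative, not taken from the text:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth Z (metres) of a matched pixel pair via Z = f * B / d.

    focal_px     -- focal length of the cameras, in pixels
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal pixel offset between the matched pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 700 px, baseline = 0.12 m, disparity = 35 px.
z = depth_from_disparity(700.0, 0.12, 35.0)  # → 2.4 m
```

Nearer objects produce larger disparities and hence smaller computed depths, which is why positioning precision depends on how accurately disparities can be measured.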
As virtual reality technologies develop, consumers continually place new demands on human-machine interfaces. The coordinate conversion between real space and the virtual game space may be implemented by using binocular 3D vision technology to perform positioning. At present, a binocular camera is typically used to perform positioning in a virtual reality system. The binocular camera has a fixed Field of View (FOV), so the space in which positioning can be performed is fixed. Once the user moves out of the field-of-view coverage of the binocular camera, the binocular camera can no longer perform positioning, which limits the area over which the user can move freely when using a virtual reality system.
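The coordinate conversion between real space and the virtual game space mentioned above can be sketched as a rigid transform: a rotation of the tracked position into the virtual frame followed by a translation. The rotation angle and offset here are illustrative assumptions, not values from the text:

```python
import math

def real_to_virtual(p, yaw_rad, offset):
    """Map a tracked real-space point p = (x, y, z) into virtual-space
    coordinates by rotating about the vertical (y) axis by yaw_rad and
    then translating by offset = (ox, oy, oz)."""
    x, y, z = p
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    xr = c * x + s * z    # rotated x
    zr = -s * x + c * z   # rotated z; y (height) is unchanged
    return (xr + offset[0], y + offset[1], zr + offset[2])

# A point 1 m in front of the camera, with a 90-degree yaw between
# the real and virtual frames and no offset:
p_virtual = real_to_virtual((1.0, 0.0, 0.0), math.pi / 2, (0.0, 0.0, 0.0))
```

In practice the rotation and offset would be obtained by calibrating the camera pose against the virtual scene; this sketch only shows the form of the conversion.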
Current solutions for extending the positioning range of a binocular camera generally enlarge the space in which positioning can be performed by increasing the FOV of the binocular camera. However, increasing the FOV causes problems such as greater imaging distortion, reduced resolution, and loss of positioning precision.