Motion-based control systems may be used to control computers and, more particularly, motion-based control systems may be desirable for use with video games. Specifically, the interactive nature of control based on motion of a movable object, such as, for example, a user's head, may make the video gaming experience more involved and engrossing because the simulation of real events may be made more accurate. For example, in a video game that may be controlled via motion, a user may move their head to different positions in order to control a view of a rendered scene in the video game. Since the view of the rendered scene is linked to the user's head movements, the video game control may feel more intuitive and the authenticity of the simulation may be improved.
In one example configuration of a motion-based control system, a user may view a rendered scene on a display screen and may control aspects of the rendered scene (e.g. change a view of the rendered scene) by moving their head. In such a configuration, the display screen may be fixed whereas the user's head may rotate and translate in various planes relative to the display screen. Further, due to the relationship between the fixed display screen and the user's head, the accuracy with which the user may control aspects of the rendered scene may be limited by the user's line of sight of the display screen. In other words, when the user's head is rotated away from the screen such that the user does not maintain a line of sight with the display screen, the user may be unable to accurately control the view of the rendered scene. Thus, in order for the user to maintain accurate control of the rendered scene, the movements of the user's head may be scaled relative to movements of the rendered scene so that the user maintains a line of sight with the display screen. In other words, the magnitude of the user's actual head movements may be amplified in order to produce larger virtual movements of the virtual perspective on the display screen. In one particular example, a user may rotate their head 10° to the left along the yaw axis and the motion-based control system may be configured to scale the actual rotation so that the virtual perspective in the rendered scene rotates 90° to the left along the yaw axis. Accordingly, in this configuration a user may control an object or virtual perspective through a full range of motion within a rendered scene without losing a line of sight with the display screen.
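The scaling described above may be sketched as a simple gain applied to the measured head angle. The 9x gain (10° actual mapping to 90° virtual) comes from the example above; the function name and degree-based convention are illustrative assumptions, not part of any particular system:

```python
# Illustrative sketch: mapping an actual head yaw angle to a virtual yaw
# angle with a fixed gain. The gain of 9 (10 degrees actual -> 90 degrees
# virtual) follows the example above; everything else is an assumption.
YAW_GAIN = 9.0

def scale_yaw(actual_yaw_deg: float) -> float:
    """Return the virtual-perspective yaw for a measured head yaw."""
    return actual_yaw_deg * YAW_GAIN

# A 10 degree head rotation to the left produces a 90 degree virtual
# rotation, so the user keeps a line of sight with the display screen.
print(scale_yaw(10.0))  # 90.0
```

A linear gain like this preserves the direction of each rotation while amplifying its magnitude, which is what allows a full virtual range of motion from small actual head movements.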
However, scaling the user's actual movements to change the view of the rendered scene may create relationships where a sequence of actual movements (e.g. rotations and translations) may not produce unique results in the virtual reality. For example, under some conditions two different sequences of movements by a user may produce the same result in the virtual reality. As another example, under some conditions the same sequence of movements by a user may produce different results in the virtual reality. In other words, due to the scaling of actual motion in the real world to produce a change of view (or perspective) in the virtual reality, actual space may no longer correspond one to one with virtual space (i.e. actual space and virtual space are not topologically equivalent). Further, scaling of actual movements to generate virtual movements may cause the virtual space to be non-Euclidean, that is to say, parallel lines in the virtual space may not remain at a constant distance from each other and thus virtual space may not be planar, but rather may be spherical, for example.
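The loss of one-to-one correspondence may be illustrated with a small 2-D sketch. Assuming, as above, a 9x yaw gain and translation performed in the rotated virtual frame, two different actual movement sequences may land the virtual perspective in the same pose, while the same two steps in the opposite order land it somewhere else; the gain, coordinate convention (+y toward the screen, counterclockwise yaw positive), and function names are all illustrative assumptions:

```python
import math

YAW_GAIN = 9.0  # assumed gain: 10 degrees of actual yaw -> 90 degrees virtual

def apply_path(steps, yaw=0.0, pos=(0.0, 0.0)):
    """Apply a motion control path: ('yaw', deg) scales actual rotation,
    ('move', dx, dy) translates in the perspective's current rotated frame."""
    for step in steps:
        if step[0] == 'yaw':
            yaw += step[1] * YAW_GAIN
        else:
            a = math.radians(yaw)
            _, dx, dy = step
            pos = (pos[0] + dx * math.cos(a) - dy * math.sin(a),
                   pos[1] + dx * math.sin(a) + dy * math.cos(a))
    return yaw, pos

# Two different actual movement sequences, one shared virtual result:
# turn left then move toward the screen ...
a = apply_path([('yaw', 10.0), ('move', 0.0, 1.0)])
# ... versus slide sideways then turn left.
b = apply_path([('move', -1.0, 0.0), ('yaw', 10.0)])
# Both end at a virtual yaw of 90 degrees at approximately (-1, 0).

# The same two steps in the opposite order end somewhere else entirely:
c = apply_path([('move', 0.0, 1.0), ('yaw', 10.0)])
# Ends at a virtual yaw of 90 degrees at (0, 1).
```

Because the final virtual pose depends on the order of the steps and not merely on where the user's head ends up, the mapping from actual space to virtual space is path dependent rather than one to one.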
Accordingly, in some cases, when a user performs a sequence of movements to move a virtual perspective to a desired position in the virtual reality, the sequence of movements may result in the virtual perspective being positioned at an undesired position in the virtual reality. This phenomenon may be perceived as disorienting to the user. Moreover, the user may have to perform many sequences of movements to position the virtual perspective in a desired position in the virtual reality, which may be perceived by the user as overly tedious and may result in the user becoming frustrated.
In order to make control of the virtual perspective less disorienting and more predictable, the motion-based control system may be modified so that translation may always be performed relative to the scaled orientation of the user's head. In such a configuration, translation may be performed according to the frame of reference of the virtual perspective in virtual reality as opposed to a predefined or static frame of reference that is independent of the orientation of the virtual perspective.
In one particular example, in a flight simulation video game, a virtual perspective may have a default or starting view that is centered in the cockpit of an airplane and directed out of the front cockpit window. A user may generate a sequence of movements including rotation and/or translation, herein referred to as a motion control path. The motion control path may include a yaw rotation to the left of 10° and a translation toward the display screen. The motion-based control system may be configured to scale the rotation so that the virtual perspective rotates 90° and the view is directed out the left-side cockpit window. Further, since the motion-based control system may be configured to translate the virtual perspective relative to the rotated frame of reference, the translation may cause the virtual perspective to be zoomed in on the view out of the left-side cockpit window. In contrast, had the motion-based control system been configured such that translation is performed independent of the orientation of the user's head, the virtual perspective would translate toward the front of the cockpit so that the view is still directed out the left-side cockpit window but is not zoomed in. By performing translation relative to the orientation of the user's head, control of the virtual perspective may be perceived as more intuitive and natural.
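The difference between the two translation behaviors in the cockpit example may be sketched as follows. The gain, the 2-D coordinates, and the axis convention (+y toward the front cockpit window, -x toward the left-side window, counterclockwise yaw positive) are illustrative assumptions:

```python
import math

YAW_GAIN = 9.0  # assumed: a 10 degree head yaw scales to a 90 degree virtual yaw

def rotate2d(vec, yaw_deg):
    """Rotate a 2-D vector counterclockwise by yaw_deg degrees."""
    a = math.radians(yaw_deg)
    x, y = vec
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Motion control path: yaw 10 degrees left (scaled to 90 degrees, so the
# view is out the left-side window), then one unit toward the display screen.
virtual_yaw = 10.0 * YAW_GAIN
head_translation = (0.0, 1.0)  # "toward the screen" in the head frame

# Translation relative to the rotated frame: the perspective moves along
# its own line of sight, zooming in on the left-side window view.
relative = rotate2d(head_translation, virtual_yaw)   # approximately (-1, 0)

# Translation in a static frame independent of orientation: the perspective
# slides toward the front of the cockpit while still looking left.
static = head_translation                            # (0, 1)
```

The only difference between the two behaviors is whether the head-frame translation vector is rotated by the scaled virtual yaw before being applied.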
However, the inventors herein have recognized that configuring the motion-based control system so that translation is always performed relative to orientation may still produce some control characteristics that are counterintuitive and disorienting; namely, translational and/or rotational skewing of the virtual perspective may occur under some conditions. This phenomenon may be most clearly perceived, and may be particularly undesirable to a user, when the user attempts to return the virtual perspective to a neutral reference position. In particular, with reference to the above described example where rotation and/or translation is scaled, in order for the user to return the virtual perspective from a second view away from the neutral reference position to the first view at the neutral reference position, the user must invert each step of the motion control path (i.e. the series of changes in orientation and/or translation) and perform the inverted steps in reverse order; otherwise the virtual perspective may be moved to a view other than the view at the neutral reference position.
Continuing with the flight simulator example, in the above described system, in order for the user to return the virtual perspective to the origin position the user must generate a motion control path which is the inverse of the first motion control path, i.e. the user must perform a translation of their head away from the display screen and then a yaw rotation of 10° to the right. If the user simply moves their head quickly back to the origin position in a manner that may feel natural and intuitive, the virtual perspective may be placed in a position that is skewed or offset from the neutral reference position since the rotation and translation motions may be out of sequence. Accordingly, in the above described configuration of the motion-based control system, a user may have difficulty aligning the virtual perspective with the neutral reference position unless the user performs a specific series of head movements. This aspect of the motion-based control system may be negatively perceived by the user since unintuitive sequences of movements may be required to implement precise control of the virtual perspective.
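The ordering requirement may be made concrete with the same kind of sketch: inverting each step while also reversing the order returns the perspective to the neutral reference position, whereas inverting each step without reversing the order leaves a residual offset. The gain, coordinates, and conventions are again illustrative assumptions:

```python
import math

YAW_GAIN = 9.0  # assumed: 10 degrees of actual yaw -> 90 degrees of virtual yaw

def apply_path(steps, yaw=0.0, pos=(0.0, 0.0)):
    """Apply a motion control path; translations occur in the rotated frame."""
    for step in steps:
        if step[0] == 'yaw':
            yaw += step[1] * YAW_GAIN
        else:
            a = math.radians(yaw)
            _, dx, dy = step
            pos = (pos[0] + dx * math.cos(a) - dy * math.sin(a),
                   pos[1] + dx * math.sin(a) + dy * math.cos(a))
    return yaw, pos

# Original path: yaw 10 degrees left, then translate toward the screen.
yaw, pos = apply_path([('yaw', 10.0), ('move', 0.0, 1.0)])

# Inverting each step AND reversing their order (translate away from the
# screen, then yaw 10 degrees right) returns to the neutral reference
# position: yaw 0 at approximately (0, 0).
back = apply_path([('move', 0.0, -1.0), ('yaw', -10.0)], yaw, pos)

# Inverting each step WITHOUT reversing the order leaves the perspective
# skewed: yaw returns to 0, but the position is offset from the origin.
skewed = apply_path([('yaw', -10.0), ('move', 0.0, -1.0)], yaw, pos)
```

The residual offset arises because each translation is interpreted in whatever rotated frame is current when it is performed, so rotations and translations do not commute and an out-of-sequence undo does not cancel the original path.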