(1) Field of the Invention
The present invention relates to a system, method and computer program product operating in a virtual environment (VE). More specifically, the present invention relates to calculating a non-linear mapping of a relative position of a real-actuator and a real-viewpoint in a real-world environment to a relative position of a virtual-actuator and a virtual-viewpoint in the virtual environment based on a calculated scale-factor. Further, the present invention relates to changing a view orientation in a virtual environment independent of the physical orientation of a user input. Additionally, the present invention relates to allowing a user to select individual or multiple objects in a virtual environment by accepting input from a three-dimensional input device, constraining the input to two degrees of freedom, and applying constrained, two-dimensional selection techniques to three-dimensional environments.
(2) Description of Related Art
Many virtual environment interfaces track the user's physical hand movement to control the placement of an actuator, i.e., a cursor. In fully immersive virtual environments (VEs), the head and, often, some limbs of the participant immersed in the VE are spatially tracked with six degrees of freedom. The six degrees of freedom are the X, Y, and Z coordinates of the position of the tracked body part (for example, the head and a hand), and the heading, pitch, and roll of the tracked body part. The known position of the real-viewpoint, i.e., the head, is used to adjust a virtual-viewpoint in the virtual world, such that turning the head to the left in the real-world environment results in the virtual-viewpoint rotating to the left in the VE. The known position of the hand in the real-world environment is used to render a virtual hand at the respective position in the VE. Thus, extending the hand one meter out from the head in the real-world environment results in an apparent distance of one meter between the virtual hand and the virtual-viewpoint in the VE. Therefore, in these systems, the user's reach in the VE is constrained to the physical reach of the hand in the real-world environment.
Some attempts have been made to overcome this limitation. The proposed solutions extend the user's reach in various ways but do not guarantee that the user will actually be able to reach any particular object of interest.
When interaction with a VE from various distances is desired, the physical reaching distance of the hand becomes an infeasible restriction on the reaching distance in the VE. For example, when the virtual environment is an actual landscape and the participant can virtually “fly above” the terrain, the participant would not be able to reach the ground whenever his altitude exceeded about one meter. What is needed is a system and method that scales the distance of the head, i.e., viewpoint, to the hand, i.e., actuator, such that the user is always able to manipulate objects within the VE regardless of the distance.
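Such scaled reach can be sketched as follows. This is a minimal illustrative sketch only: the function name is hypothetical, and a constant scale-factor is assumed for simplicity, whereas the invention contemplates a calculated, non-linear scale-factor.

```python
def map_actuator(real_viewpoint, real_actuator, scale_factor):
    # Scale the real actuator's offset from the real viewpoint so the
    # virtual actuator can reach farther than the physical arm allows.
    # A scale_factor of 1.0 reproduces the one-to-one mapping above.
    return tuple(v + scale_factor * (a - v)
                 for v, a in zip(real_viewpoint, real_actuator))

# A one-meter physical reach with a scale factor of 10 places the
# virtual actuator ten meters from the virtual viewpoint.
virtual_hand = map_actuator((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 10.0)
```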
Additionally, traditional immersive navigation tracks a participant in six degrees of freedom, giving the feeling that the participant is immersed in the VE. Immersive navigation is natural, as the user physically looks and moves around the room to traverse the VE. However, movement is physically limited by equipment, space, and body posture. Situations that require a participant to look up or down for long periods result in fatigue and neck strain.
One method of allowing a user to interact with a VE is called trackball navigation. Trackball navigation allows a participant to orbit an object or point of interest to study it from multiple vantage points and distances. Trackball navigation is constrained, making it easy to control, but not well-suited to immersive environments. For instance, head-tracking data, which is one aspect of an immersive environment, alone does not naturally map to control of trackball parameters.
Trackball navigation can be described by envisioning a sphere centered on the trackball center. The viewpoint is generally located on the surface of the sphere, and if the participant is facing into the neutral direction in the physical world (i.e., straight ahead when seated in a fixed chair), the viewpoint is always directed towards the center of the sphere, i.e., the trackball center.
In trackball navigation, there are two operational viewpoint controls. First, the participant can move around on the surface of the sphere. The participant can rotate along the surface in a vertical plane (up and down), thus gaining a viewpoint more above or below the trackball center. In addition, the participant can move along the surface in a horizontal plane, thus rotating around the trackball center and looking at it from east, northeast, north, etc. When rotating in the horizontal plane, the viewpoint direction may be changed accordingly (west, southwest, south, etc.). Second, the participant can change the radius of the sphere, which results in apparent zooming in toward and out from the trackball center.
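The two viewpoint controls described above can be sketched as a spherical-coordinate computation. The function name and angle conventions below are illustrative assumptions, not taken from any particular implementation.

```python
import math

def trackball_viewpoint(center, radius, azimuth, elevation):
    # Place the viewpoint on a sphere of the given radius around the
    # trackball center: azimuth moves it in the horizontal plane,
    # elevation moves it in the vertical plane, and changing the
    # radius produces the apparent zoom.
    cx, cy, cz = center
    x = cx + radius * math.cos(elevation) * math.cos(azimuth)
    y = cy + radius * math.sin(elevation)
    z = cz + radius * math.cos(elevation) * math.sin(azimuth)
    # The view direction always points back at the trackball center.
    view_direction = (cx - x, cy - y, cz - z)
    return (x, y, z), view_direction
```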
Advantages of trackball navigation are that it closely resembles the eye and hand movements of a human manipulating objects in a sandbox simulation. With multi-directional viewpoint control, the participant is able to study a point or object of interest from multiple angles quickly and easily.
Another method of allowing a user to interact with the VE is called grab navigation. Grab navigation increases the perceived range of motion for the participant. Instead of moving only in the relatively small tracked area of the physical environment, the participant is free to travel great distances with a move of the hand. One drawback is that grab navigation alone does not offer any control over the view orientation.
Using grab navigation, the participant navigates within the virtual environment by grabbing the virtual world, making a grab-gesture with his hand. As long as he maintains this gesture, the position in the virtual world where he grabbed is locked to the position of the hand. Therefore, when physically moving the hand to the right, the participant translates to the left in the virtual world. An analogy in two-dimensional user interfaces is the “grab document” action in Adobe Reader®. When a participant grabs the world and then lowers his hand, the apparent effect is that the world sinks away with the hand, or alternatively, that the participant rises above the world.
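The grab behavior described above reduces to subtracting the hand's displacement from the participant's virtual position while the gesture is held. A minimal sketch, with an assumed (hypothetical) function name:

```python
def grab_update(virtual_position, hand_position, grab_position):
    # While the grab gesture is held, the grabbed world point stays
    # locked to the hand, so the participant's virtual position moves
    # opposite the hand: moving the hand right translates the
    # participant left, and lowering the hand raises the participant.
    return tuple(p - (h - g) for p, h, g in
                 zip(virtual_position, hand_position, grab_position))
```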
Additionally, what is needed is a system, method, and computer product which allows for immersive navigation without the draw-backs of fatigue and strain to the user, and also allows for navigation in constrained environments.
In a virtual environment, a user typically indicates a single point or object of interest by making a static pointing gesture with the user's hand. Alternatively, the user indicates multiple objects of interest by sweeping the pointing gesture across the field of view. Most interfaces do not use head orientation to directly control selection. In current virtual environments, head movement is mapped to steering a fly-through of the virtual environment, and head-tracking data is often used to control the immediate viewpoint. However, if head orientation data could be incorporated directly into the computation of user selections, a user could make selections with the user's head alone or in combination with another body part.
Other interfaces for virtual environments are based on direct manipulation. However, direct manipulation is not always feasible for situations that require a user to interact with distant objects or numerous objects simultaneously. Furthermore, a familiar two-dimensional input device, like a mouse, is not usually available in a virtual environment.
Selection and pointing are often difficult in a virtual environment. Errors in the tracking data, use of a navigation metaphor in combination with immersive head tracking, and selection over long distances are several factors that make selection and pointing difficult.
A small error in the tracking data, or a small jitter in either the tracking equipment or the user's hand, is greatly magnified when a selection is made over a very long distance. Furthermore, the error grows in proportion to the distance.
In traditional virtual environments, a ray is cast from the user's finger for use as a pointer. This method requires extremely high accuracy in all measurements of joint angles or positions. A ray projected over a long distance will greatly magnify any error in such measurements, making it very impractical for applications with large-scale virtual environments. In some instances, errors and anomalous values in tracking data can be mitigated with smoothing functions or filters. However, this technique only works for errors due to poor tracking data. Errors caused by physiological and environmental constraints must be handled differently. For example, if the user unintentionally moves his hand by one degree, the movement error at a distance of 10 meters would be about 0.175 meters. However, if the distance is increased to 10 kilometers, the error is approximately 175 meters. Natural muscle movement can thus make selection using the ray casting approach very difficult.
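The figures above follow from the arc-length approximation, in which the lateral error of a cast ray is roughly the distance multiplied by the angular error in radians. A short sketch (the function name is illustrative):

```python
import math

def pointing_error(distance, angular_error_degrees):
    # Lateral displacement of a cast ray at the given distance, using
    # the arc-length approximation: error = distance * angle (radians).
    return distance * math.radians(angular_error_degrees)

# A one-degree hand movement yields roughly 0.175 m of error at
# 10 meters, but roughly 175 m of error at 10 kilometers.
```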
Therefore, a need exists in the art for a method of selecting and pointing in a virtual environment with ease of use and great accuracy over long distances.