The present invention relates generally to haptic rendering and more particularly to a ray-based haptic rendering technique for touching and feeling arbitrary three-dimensional (3D) polyhedral objects in virtual environments (VEs).
As is known in the art, advances in virtual reality and robotics have enabled the human tactual system to be stimulated in a controlled manner through force-feedback devices, also referred to as haptic interfaces. A haptic interface is a device that enables manual interaction with virtual environments or teleoperated remote systems. Such systems are typically used for tasks that are usually performed using hands in the real world.
Force-feedback devices generate computer-controlled forces to convey to the user a sense of the natural feel of the virtual environment and the objects within it. In this regard, haptic rendering can be defined as the process of displaying computer-controlled forces on the user to make the user sense the tactual feel of virtual objects.
Haptic interface systems typically include an end effector or probe with which the user interacts with the haptic interface system. Conventional haptic interface systems represent or model the probe as a single point in a virtual environment. Such haptic systems are thus referred to as point-based systems.
Through the single point representation of the probe, the user is able to explore the shape, surface details, and material properties of virtual objects. Since the probe is represented as a point, the net force is the only haptic feedback that can be sent to the user. For exploring the shape and surface properties of objects in virtual environments (VEs), point-based techniques provide users with force feedback similar to that which they would experience when exploring the objects in real environments with the tip of a stick. While point-based haptic systems are computationally efficient, they enable the user to feel only the interaction forces, not the torques. Thus, one problem with point-based techniques is that they fail to simulate tool-object interactions that involve multiple constraints.
To simulate real tool-object interactions, the computational model of the simulated tool cannot be reduced to a single point since the simulated tool must be able to contact multiple objects and/or different points of the same object simultaneously as does a real tool. Moreover, the resulting reaction torques have to be computed and reflected to the user to make the simulation of haptic interactions more realistic.
Several haptic rendering techniques have been developed to render 3-D objects. Just as in computer graphics, the representation of 3-D objects can be either surface-based or volume-based for the purposes of computer haptics. While the surface models are based on parametric or polygonal representations, volumetric models are made of voxels.
Although a single point is not sufficient for simulating the force and torque interactions between three-dimensional (3-D) objects, one approach that allows such simulation is to use a group of points. For example, voxel-based approaches for six degree-of-freedom (6-DOF) haptic rendering have been proposed. In this approach, static objects in a scene are divided into voxels and the probe is modeled as a set of surface points. Multiple collisions are then detected between the surface points of the probe and each voxel of the static object to reflect forces based on a tangent-plane force model. A tangent plane whose normal is along the direction of the collided surface point is constructed at the center of each collided voxel. Then, the net force and torque acting on the probing object are obtained as the summation of all force/torque contributions from such point-voxel intersections. Although this approach enables 6-DOF haptic interactions with static rigid objects, its extension to dynamic and deformable objects would significantly reduce the haptic update rate because of the computational load. Moreover, it is difficult to render thin or small objects with this approach.
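The summation step of the tangent-plane force model described above can be sketched as follows. The function name, the data layout, and the linear-spring stiffness are assumptions for illustration only, not details of the referenced approach:

```python
import numpy as np

def tangent_plane_forces(probe_points, collided_voxels, stiffness=1.0):
    """Sum force/torque contributions over point-voxel intersections.

    probe_points   : (N, 3) array of surface points of the probing object
    collided_voxels: list of (voxel_center, surface_normal, point_index)
                     tuples produced by a prior collision-detection pass
    Returns the net force and net torque about the probe origin.
    """
    net_force = np.zeros(3)
    net_torque = np.zeros(3)
    for center, normal, idx in collided_voxels:
        n = normal / np.linalg.norm(normal)
        # Penetration depth of the probe point below the tangent plane
        # constructed at the collided voxel's center.
        depth = np.dot(center - probe_points[idx], n)
        if depth > 0.0:
            force = stiffness * depth * n          # simple spring model (assumed)
            net_force += force
            net_torque += np.cross(probe_points[idx], force)
    return net_force, net_torque
```

Because every collided voxel contributes a term to the sum, the per-frame cost grows with the number of point-voxel intersections, which is the computational load noted above for dynamic and deformable objects.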
Another problem with conventional point-based and voxel-based systems, however, is that they do not allow realistic representations of side collisions such as a collision between an object and a portion of the probe other than an end point. Such systems also fail to provide realistic representations of tools having a length such as surgical instruments, for example.
It would, therefore, be desirable to provide a haptic system in which the probe is not modeled as a single point. It would also be desirable to provide a haptic system which computes forces due to collisions between different portions of the probe and one or more virtual objects. It would further be desirable to provide a haptic system which computes and displays reaction forces and torques. It would be still further desirable to provide a ray-based haptic rendering technique that enables the user to touch and feel objects along the length of a probe.
A force-feedback system includes a probe modeled as a line segment or ray in a virtual environment.
With this arrangement, a ray-based force-feedback system is provided which represents collisions between an object and any portion of the probe, including but not limited to an end point. The system can thus provide a realistic representation of a tool, such as a surgical instrument, having a length. By connecting a pair of force-feedback devices, the ray-based force-feedback system of the present invention exposes the user to torques in addition to forces, both of which are essential in simulating tool-object interactions. By allowing a user to explore an object with a probe having a length, the ray-based rendering technique of the present invention yields better haptic perception of some 3D objects than existing point-based techniques.
The ray-based haptic rendering technique of the present invention enables the user to touch and feel convex polyhedral objects with a line segment model of the probe. The ray-based haptic rendering technique of the present invention not only computes the forces due to collisions between the probe and virtual objects, but also the torques that are required to be displayed in simulating many tool-handling applications. Since the real-time simulation of haptic interactions (force/torque) between a 3D tool and objects is computationally quite expensive, ray-based rendering can be considered as an intermediate step towards achieving this goal by simplifying the computational model of the tool. By modeling the probe as a line segment and utilizing the ray-based rendering technique of the present invention, users have a more rapid haptic perception of 3D convex objects than when the probe is modeled as a point.
Haptic systems which model a probe and objects using ray-based techniques have several advantages over the conventional haptic systems which model probes using point-based techniques. First of all, side collisions between the simulated tool and the 3D objects can be detected. Thus, a user can rotate the haptic probe around the corner of the object in continuous contact and get a better sense of the object's shape. Second, ray-based rendering provides a basis for displaying torques to the user. Using the ray-based rendering technique, one can compute the contact points, depth of penetration, and the distances from the contact points to both the ends of the probe. Then, this information can be used to determine the forces and torques that will be displayed to the user. Third, the ray that represents the probe can be extended to detect the collisions with multiple layers of an object. This is especially useful in haptic rendering of compliant objects (e.g. soft tissue) or layered surfaces (e.g. earth's soil) where each layer has different material properties and the forces/torques depend on the probe orientation. Fourth, it enables the user to touch and feel multiple objects at the same time. If the task involves the simulation of haptic interactions between a tool and an object, ray-based rendering provides a more natural way of interacting with objects. Fifth, the reachable haptic workspace can potentially be extended using this technique since the technique allows full control of forces and torques that are displayed to the user. This means that it may be possible to create an illusion of touching distant objects by virtually extending the length of the probe and appropriately changing the direction and magnitude of the reflected forces.
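One way the distances from a contact point to the two ends of the probe can be turned into displayed forces is the lever rule: splitting a contact force between the two probe ends so that two devices, one per end, reproduce both the net force and the net torque. This is a minimal sketch under that assumption; the function name and interface are hypothetical:

```python
import numpy as np

def endpoint_forces(contact_point, force, end_a, end_b):
    """Split a contact force between the two probe ends so that devices
    attached at end_a and end_b jointly reproduce the net force and the
    net torque of the original contact.

    A contact a fraction s of the way from end A to end B contributes
    (1 - s) of the force at A and s of the force at B (lever rule).
    """
    axis = end_b - end_a
    # Normalized position of the contact along the probe segment.
    s = np.dot(contact_point - end_a, axis) / np.dot(axis, axis)
    s = np.clip(s, 0.0, 1.0)   # keep the split on the physical probe
    return (1.0 - s) * force, s * force
```

A contact at the midpoint, for example, sends half the force to each end, while a tip contact sends the full force to that end alone.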
For example, in performing minimally invasive surgeries, the surgeon inserts thin long rigid tubes into the body of the patient through several ports. Small size instruments attached to these tubes are used for manipulating internal organs. During surgery, the surgeon accesses the targeted area by pushing the organs and surrounding tissue aside using the instruments and feels both the interaction forces and torques. A point-based technique is inadequate to fully simulate such haptic interactions between surgical instruments and virtual organs. If the instrument is modeled as a single point, the side collisions of an instrument with organs will not be detected and the instrument will pass through any organ other than the one touching the tip. In addition, multilayered and damaged tissues whose reaction forces depend on the tool orientation can be simulated using the ray-based technique if the ray is extended along the contacted surface and multiple collisions with the layers of the virtual object are detected to compute interaction forces.
Another example where ray-based rendering is preferable is the simulation of an assembly line in car manufacturing. A scenario may involve a mechanic going under a virtual car and turning nuts on an engine block. Some of these procedures are done with mechanical instruments attached to a long, rigid shaft which enables the mechanic to reach difficult areas of the engine. Typically, vision is limited and the mechanic finds his way around using haptic cues only. Moreover, the path to the nuts is usually blocked by several other mechanical components, which makes the haptic task even more challenging. The simulation of this procedure in virtual environments will certainly involve the modeling of torques and the simultaneous detection of multiple collisions, since a long rigid shaft is used to reach the targeted areas.
In accordance with the present invention, the probe is represented as a ray or line segment for the purpose of collision detection. The haptic interaction system described herein as ray-based haptic rendering includes means for allowing interaction along a ray or a line segment or segments rather than only at a point as in prior art techniques. The system also includes means for rapid computation of collisions between an end-effector (or generic probe) of a haptic device and one or more three-dimensional (3D) objects in a scene. With this particular arrangement, a ray-based haptic interaction system capable of haptically rendering side collisions and tools having a length is provided. The system and techniques can also provide additional haptic cues for conveying to the user the shape of objects.
The haptic interaction paradigm of the present invention as described herein enables a user to touch and feel 3-dimensional polygon-based objects in virtual environments via a haptic interface. In this regard, the technique of the present invention computes these forces and enables the user to touch and feel arbitrarily shaped 3D objects in a synthetic environment as if they exist and as if they are being touched with a probe which has a length (rather than simply being a point). Using the interface device and the developed methods and techniques, a system user feels as though they are touching invisible objects. The ray-based haptic interaction technique of the present invention differs from previously developed techniques since the technique of the present invention utilizes interactions between a finite ray (i.e. a line segment) and virtual objects rather than between a point and virtual objects. This feature enables the computation of torques, detection of side collisions, and simulation of tool-object interactions in a more realistic manner than heretofore.
The haptic interaction technique works with force-reflecting haptic interfaces and enables the user to feel the forces that arise from interactions between simulated instruments and objects. In order to demonstrate the practical utility of the developed techniques, a force-reflecting device developed at the Artificial Intelligence Laboratory of MIT (referred to as a PHANToM) was used as the haptic interface. This device is now available in the market through SensAble Technologies, Inc. It is a low-friction force-feedback device which can fit on a desktop and contains three motors which control translational forces exerted on the user's fingertip. A pencil-shaped probe (stylus) or a thimble can be used as the end effector held by the user.
In the ray-based haptic interactions of the present invention, the generic probe of the haptic device is modeled as a finite ray having a particular orientation, and collisions are checked between the ray and the objects. The collision detection techniques return the collision point, which is the intersection point between the ray-based probe and the object. In the point-based case, the only possible collision situations are point-point, point-line, and point-polygon collisions. The collisions in the ray-based case, however, can in addition be line-point, line-line, and line-polygon collisions. The ray-based haptic rendering technique of the present invention allows representations of side collisions that can provide the user with more cues about the shape of the object than those provided through tip collisions alone. There can also be multiple contacts composed of a combination of the above cases. Consequently, computations for ray-based interactions must handle more collision cases and are more complicated than in the point-based approach.
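For polygonal objects, the line-polygon case above reduces to testing the finite probe segment against each triangle. The following sketch uses the well-known Moller-Trumbore ray-triangle test restricted to the segment's parameter range; the source does not specify which intersection test is used, so this is one plausible choice, not the claimed implementation:

```python
import numpy as np

def segment_triangle_intersect(p0, p1, v0, v1, v2, eps=1e-9):
    """Return the intersection point of segment p0->p1 with triangle
    (v0, v1, v2), or None if the two do not intersect.
    """
    d = p1 - p0                      # segment direction (not normalized)
    e1, e2 = v1 - v0, v2 - v0        # triangle edge vectors
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)             # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)            # parameter along the segment
    if t < 0.0 or t > 1.0:           # hit lies beyond the finite probe
        return None
    return p0 + t * d
```

Running this test against every face of every object in the scene yields the full set of contacts, including the side and multiple-contact cases that the point-based approach cannot produce.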
With the present technique, a new ray-based haptic interaction technique for rendering objects in VEs is provided. It should be noted that it is not necessary to utilize a new technique for detecting collisions between the haptic interface and the virtual objects. Rather, a new paradigm is provided that enables the user to interact with virtual objects via a ray rather than a point. The major advantages of this paradigm are discussed below. The collision detection techniques (e.g. point-polygon, line-line, line-polygon) used in the ray-based haptic interactions have been intelligently blended with a rule-based methodology for collision response.
Some advantages of interacting with a probe modeled as a line segment rather than a point are as follows. The ray-based rendering technique of the present invention considers the interactions between the line segment model of the probe and the virtual objects. Because the probe is modeled as a line segment rather than a single point, side collisions between the probe and the virtual objects can be detected. This enables the user to feel the forces that arise from side interactions. Also, multiple collisions can be detected using the ray-based rendering technique. This enables haptic rendering of multi-layered structures (i.e. the ray will collide with each layer of the structure) and of multiple objects in the scene at once (i.e. the ray will collide with multiple objects). The total force that will be reflected to the user can be calculated simply by adding up the forces resulting from each collision.
Additionally, torque (i.e. force multiplied by distance) can be calculated using the ray-based rendering technique. A pair of PHANToMs connected to opposing ends of a probe can convey to the user a feeling of torques about some axes, as can other haptic devices. Furthermore, it is expected that progress will be made in hardware development and that more sophisticated rendering techniques will be required to display both forces and torques. In this regard, the ray-based technique described herein takes the first steps toward rendering torques in virtual environments. In view of the disclosure of the ray-based technique of the present invention, the concepts and techniques can be easily extended to display torques as torque-reflecting haptic interfaces become increasingly available. In one exemplary system, even with a single PHANToM, some users feel the illusion of torques when interacting with multiple objects in virtual environments via the ray-based haptic interaction technique. Although the PHANToM can be used as a force-feedback device, the techniques of the present invention are general enough to work with other types of force-feedback devices that enable the user to feel the forces that arise from interactions between a variety of simulated tools and objects.
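The force summation and the torque-as-lever-arm computation described above can be sketched together; the pivot point (e.g. the grasp position on the stylus) and the contact representation are assumptions for illustration:

```python
import numpy as np

def net_force_and_torque(contacts, pivot):
    """Accumulate the net force and net torque over every contact
    detected along the line-segment probe.

    contacts: list of (contact_point, force) pairs, one per collision
              (side contacts, layer contacts, and multiple objects alike)
    pivot   : point about which torques are reported
    """
    force = np.zeros(3)
    torque = np.zeros(3)
    for point, f in contacts:
        force += f                               # forces simply add up
        torque += np.cross(point - pivot, f)     # torque = lever arm x force
    return force, torque
```

A single off-axis contact thus produces a nonzero torque even when the net force is small, which is the cue a torque-capable display (such as the paired-device arrangement above) would render.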
The developed methods and techniques of the present invention are more suitable to the simulation of "instrument-object" interactions (such as interactions between surgical instruments and biological tissue) than are prior art techniques. Prior art techniques consider forces that arise from only the position of the probe tip. In the present technique, the interaction forces are computed using both the probe tip position and the probe orientation. As the user manipulates the probe of the PHANToM in the real world, the techniques of the present invention detect the collisions between the simulated probe and the objects in the virtual world in real time. Since the orientation of the probe is considered during the computation of collisions, the techniques of the present invention are better suited to simulating "instrument-object" interactions than existing techniques.
The techniques of the present invention when combined with a haptic interface system can have several applications in areas including but not limited to medicine, telemedicine, computer animation, teleoperation (remote manipulation), entertainment, rehabilitation, education, training, design, manufacturing, marketing, hazardous operations, information visualization and teleconferencing. The techniques of the present invention can be used in virtual reality simulations as well as remote manipulation of objects where "hands-on" experience is important. For example, surgeons and medics use their hands frequently to manually explore the internal organs and tissues and to manipulate surgical instruments. The techniques of the present invention when used with a force feedback device can be used to train such medical personnel. Using the techniques described herein, the user can feel the forces that arise from "surgical instrument-biological tissue" interactions in virtual environments (e.g. one can use the probe of the force feedback device as a scalpel and make an incision on a virtual patient). The feeling of interaction forces provides sensory feedback to the user about the precision of the incision and the characteristics of the tissue. In the area of remote manipulation and telerobotics, the techniques can be used to provide the operator of the telerobot with a sense of the reaction forces. This will increase the effectiveness of the operator in applications such as the control of remote devices and vehicles (e.g. handling of hazardous materials). In the area of entertainment, the techniques of the present invention provide a new way to interact with arbitrarily shaped 3D objects in virtual environments. It is also envisioned that this will result in the generation of more interactive and immersive computer games and simulations.