In human interface devices, in which a user manipulates a robotic device or provides some form of input to a computer controlled system (e.g., a virtual reality scenario operated by a computer or processor control system), it is highly desirable to track the position and orientation of the user's movements or gestures with sufficient accuracy to effect precise control of the robotic device or of the computer controlled system.
Robotic manipulators, commonly referred to as “robotic arms”, are one type of device in which a human interface device can be employed to facilitate movement and control of the manipulator. Robotic manipulators are typically composed of two or more generally linear links connected by joints, and spatial motion of the manipulator is typically achieved by manipulation of those joints. In other designs, links can be elongated and shortened in a telescopic manner to enable motion of the manipulator. In many applications, manipulator motion is performed to change the position and/or orientation of a device located at a terminal end of the manipulator. The device at the terminal end of the manipulator is typically referred to as an end-effector and can be a grasping tool that emulates the action of a human hand or some other gripping mechanism to facilitate grasping and handling of objects. Alternatively, the end-effector can be some other form of tool that does not facilitate grasping, such as a sledgehammer or hammer, a cutting device, a wrench or screwdriver, etc.
Motion of the end-effector with respect to a predefined base point of the robotic manipulator is controlled by motors that control joint rotation and/or elongation. For any given set of joint positions there is one unique position and orientation of the end-effector relative to the predefined base point of the manipulator, whereas for a given end-effector position and orientation there are typically multiple combinations of joint positions, relative to each other and to the base point, that will achieve that end-effector position and orientation. Such joint positioning to achieve the desired position and orientation of the end-effector is referred to as the kinematics of the robotic manipulator.
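By way of illustration only, the relationship described above can be sketched for a hypothetical two-link planar arm (link lengths `l1` and `l2`, and the function names, are assumptions introduced here for illustration): the joint-to-end-effector mapping is unique, while the reverse mapping typically admits two solutions (the familiar “elbow-up” and “elbow-down” configurations).

```python
import math

# Hypothetical two-link planar arm. Joint angles (theta1, theta2) map
# to exactly one end-effector position (x, y).
def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# A given end-effector position (x, y) typically admits multiple joint
# solutions -- here the "elbow-up" and "elbow-down" pair.
def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against floating-point drift
    solutions = []
    for s2 in (math.sqrt(1 - c2 * c2), -math.sqrt(1 - c2 * c2)):
        theta2 = math.atan2(s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions
```

Both inverse solutions round-trip through the forward map to the same end-effector position, which is precisely the one-to-many property noted above.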
Control of the robotic manipulator can be achieved in a number of ways. For example, manipulator control can be autonomous, where the robot uses sensor input and logic to determine the motion of the links and joints without human intervention. Another type of manipulator control, commonly referred to as “Teach & Play”, involves the recording of precise settings of joint control and then playback of the settings to achieve desired specific and repetitive actions for the manipulator (e.g., useful in assembly line applications).
However, many applications in which manipulators are used require real-time control over the manipulator by a human operator. For these types of applications, the operator typically manipulates a human interface that causes the joints to move, individually or together, so as to navigate the end-effector from an initial position to a final position. These types of direct control, however, are often non-intuitive and require training before the operator can master control of the manipulator.
One common method for implementing remote control in a manipulator is to incorporate a human interface that allows the operator to select a joint of the manipulator and then control a motor that moves the joint in a selected direction. Another joint can then be selected by the operator, with motor control being selected by the operator to effect movement of that joint. The process of selecting and moving different joints is repeated by the operator until the end-effector of the manipulator is at the desired position and orientation. The human interface for this type of operator control can be in the form of a switch and/or a joystick that sets drive direction and speed in response to operator movement, with the movement commands for the joints being conveyed to the manipulator over a communication link between the human interface and the manipulator. This type of positioning of the end-effector by step-by-step or successive positioning of joints is typically used for small ground robotic manipulators such as TALON robots available from Foster-Miller Inc. (Massachusetts, USA) or PackBot robots available from iRobot Corporation (Delaware, USA). Robotic control in this manner is limited in that it can take a considerable amount of time for the end-effector to achieve its desired position. In addition, this type of positioning may not provide adequate accuracy for positioning the end-effector and also does not permit arbitrary trajectories of motion to be achieved.
Another method for providing remote manipulator control involves an exoskeleton system worn on an operator's arm, where the exoskeleton system includes sensors that measure the joint angles of the operator's arm as it moves. The measured joint angles are communicated to the manipulator via a communication link, and a local manipulator controller then positions the joints of the manipulator to match the corresponding joint angles of the operator. This type of manipulator control is limited in that it is only applicable to humanoid manipulators (e.g., manipulators that closely resemble and have the same joints as a human arm), and the mimic control is absolute. In other words, this type of control does not provide enhanced control by relative movement instructions derived from an arbitrary orientation of the operator's arm or other limb.
Thus, the previously described manipulator control mechanisms provide low-level control in which the joint positions of the manipulator are individually controlled. It would be desirable to provide a manipulator control system that provides high-level control, in which the end-effector position and orientation is directly controlled rather than the focus being on direct control of individual joint positions.
In addition, it is desirable to precisely monitor and control the full position and orientation of the user's movements in six degrees of freedom (6 DoF), i.e., monitoring position changes in three dimensions along the x, y and z axes as well as orientation changes of yaw, pitch and roll (rotational movements about the x, y and z axes), so as to effect precise position and orientation control of a robotic manipulator or in a virtual reality scenario operated by a computer or other processing system.
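A 6 DoF pose of the kind described above is conventionally represented as a 4x4 homogeneous transform combining the three translations with the three rotations. The following is a minimal sketch, assuming a Z-Y-X (yaw, then pitch, then roll) rotation order; the function name is introduced here for illustration only.

```python
import math

def pose_matrix(x, y, z, yaw, pitch, roll):
    """Compose a 4x4 homogeneous transform from a 6 DoF pose:
    translation (x, y, z) plus yaw/pitch/roll applied in Z-Y-X order."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Rotation R = Rz(yaw) * Ry(pitch) * Rx(roll)
    r = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    # Append the translation column and the homogeneous bottom row.
    return [r[0] + [x], r[1] + [y], r[2] + [z], [0.0, 0.0, 0.0, 1.0]]
```

Tracking all six quantities means maintaining every entry of this transform over time; a sensor that reports orientation alone recovers only the upper-left 3x3 rotation block, leaving the translation column undetermined.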
Devices known in the art that provide both position and orientation data so as to achieve 6 DoF tracking are complex and typically require multiple sensors on the device along with external observation sensors in order to achieve reliable position and orientation data. While orientation tracker sensors currently exist that provide real-time output of orientation changes, these sensors typically determine position changes by measuring acceleration, which is integrated once to obtain instantaneous velocity and integrated again to obtain displacement or position changes relative to a starting position. Determining position changes via such double integration of accelerometer data leads to inaccuracies, because sensor noise and bias accumulate with each integration. For this reason, typical position monitoring and control devices must employ additional, external tracking sensors (such as video, ultrasound, magnetic field and/or other sensing equipment) to achieve reliable information about changes in position.
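The drift problem described above can be demonstrated with a short simulation. In this illustrative sketch (the bias and noise figures are hypothetical, not taken from any particular sensor), a stationary device whose accelerometer reports a small constant bias plus noise appears to travel roughly half a meter after ten seconds of dead reckoning.

```python
import random

def integrate_position(accel_samples, dt):
    """Naive dead reckoning: Euler-integrate acceleration to velocity,
    then velocity to position."""
    v = p = 0.0
    positions = []
    for a in accel_samples:
        v += a * dt
        p += v * dt
        positions.append(p)
    return positions

# The device is actually stationary, but the accelerometer reports a
# hypothetical constant bias of 0.01 m/s^2 plus Gaussian noise.
random.seed(0)
dt, n = 0.01, 1000  # 10 s of samples at 100 Hz
samples = [0.01 + random.gauss(0.0, 0.05) for _ in range(n)]
drift = integrate_position(samples, dt)
# The bias alone contributes roughly 0.5 * 0.01 * (10 s)^2 = 0.5 m of
# apparent displacement, even though the true position never changed.
```

Because the bias term grows quadratically with time, the error cannot be bounded without an external position reference, which is why the external tracking sensors mentioned above are needed.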
It is desirable to provide a position and orientation tracking system that is simple in design and reliable in tracking a full range of human motions or gestures, so as to generate 6 DoF tracking information for achieving a desired movement or other control of a robotic manipulator, a computer or other processing system.