Traditional methods for grasping a complex three-dimensional object using a robot hand are tedious and consume substantial time and processing resources. One iterative manual approach consists of (1) manually moving the robot hand to a position near the object, (2) positioning the fingers around the object for a desired grasp, (3) using a command program to initiate closure of the fingers around the object, (4) checking whether the object is securely held by manually attempting to shake the object out of the robot grasp, (5) making a mental judgment of whether other robot finger positions would be better, and, if so, (6) repeating the above steps until the operator is satisfied with the grasp.
The difficulties relate in part to the need to independently control each of the large number of degrees of freedom (DOF) associated with the hand.
In addition to being time and energy consuming, it is difficult or impossible with these methods to efficiently achieve a best or ideal grasp for a specific task. And the likelihood of doing so decreases in inverse proportion to the number of degrees of freedom of the robot hand being used, especially at higher counts such as six, ten, twelve, or more. That is, the difficulty of planning a feasible and robust grasp in real time is much higher for grasping devices having more separately-actuatable elements.
A five-finger hand, for instance, can have twelve or more degrees of freedom (DOF), or separately-movable elements. A DOF breakdown for an example five-finger robot hand resembling a human hand includes four DOF for the thumb, three DOF for each of the index finger and the middle finger, one DOF for the ring finger, and one DOF for the little finger.
As another example, a human hand has been modeled with twenty (20) DOFs.
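The DOF breakdown for the example five-finger hand described above can be tallied in a few lines; the finger names and per-finger counts are taken from the text, while the code structure itself is merely illustrative:

```python
# Illustrative tally of degrees of freedom (DOF) for the example
# five-finger robot hand described above (counts from the text).
dof_per_finger = {
    "thumb": 4,
    "index": 3,
    "middle": 3,
    "ring": 1,
    "little": 1,
}

total_dof = sum(dof_per_finger.values())
print(total_dof)  # 12, consistent with "twelve or more DOF"
```

Each added DOF multiplies the size of the configuration space a planner must search, which is why higher-DOF hands are harder to plan for.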
Some conventional systems grasp objects based simply on feedback from real-time vision systems and/or touch sensors. Some attempt to grasp objects based only on feedback from a camera and/or one or more on-robot touch sensors, or by manual human control or teaching of the robot hand. All are computationally expensive and too time consuming to perform efficiently in real-time applications.
Active research in automatic robotic grasp planning includes grasp contact modeling, grasp quality evaluation, and grasp optimization. Contact modeling includes modeling effects of contact frictions at one or more grasp contact locations. Quality evaluation includes quantifying grasp quality based on a set of grasp quality metrics.
Other difficulties associated with planning an optimal grasp for a robotic hand include the complex kinematic structure associated with the many degrees of freedom, complexities of the geometric shapes of objects to be grasped, and the need to determine the frictional forces required at each grasp contact—i.e., at each hand-to-object contact point.
Conventional approaches for planning a robotic grasp, such as those from academia, can be grouped roughly into three categories: classic motion planning using collision detection, direct construction of grasp contacts based on grasp-closure properties, and grasp planning using explicit kinematic hand constraints.
Example classic motion planning with collision detection uses probabilistic roadmaps (PRM), rapidly-exploring random trees (RRT), and collision-detection techniques to generate a high number of candidate grasps, such as thousands of candidate grasps, and then evaluates the candidates for grasping quality. Generating thousands of candidate grasps is, alone, time and computation consuming. Selecting a preferred grasp according to this approach, by evaluating each of the generated candidates, takes even more time and processing. Generating the grasps and selecting one can take an hour or more.
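The generate-then-evaluate workflow just described can be sketched as follows. The functions `sample_random_grasp`, `collides`, and `grasp_quality` are hypothetical placeholders standing in for a PRM/RRT sampler, an explicit collision checker, and a grasp quality metric; in practice those three components are the expensive parts:

```python
import random

random.seed(0)  # for a reproducible illustration

def sample_random_grasp():
    # Placeholder: a real planner samples hand poses / joint angles
    # via PRM or RRT in the hand's high-dimensional configuration space.
    return [random.uniform(-1.0, 1.0) for _ in range(12)]

def collides(grasp):
    # Placeholder for explicit collision detection between hand and scene.
    return sum(abs(q) for q in grasp) > 10.0

def grasp_quality(grasp):
    # Placeholder quality metric (e.g., a force-closure-based score).
    return -sum(q * q for q in grasp)

# Generate thousands of candidates, discard colliding ones, then
# evaluate every survivor -- the costly step of this approach.
candidates = [g for g in (sample_random_grasp() for _ in range(5000))
              if not collides(g)]
best = max(candidates, key=grasp_quality)
```

Even with trivial placeholder functions the loop touches thousands of candidates; with real collision checking and quality evaluation, this is the source of the hour-plus planning times noted above.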
Three main bottlenecks to reaching a speedy solution exist in this approach. The first is the complexity of the object modeling required. The second is the need for explicit collision detection. The third is the high-dimensional configuration space of robot hands that must be searched for a valid robotic grasp.
The second conventional approach mentioned above, constructing grasp contacts based on grasp-closure properties, starts with a grasp quality metric, such as form closure or force closure, and then attempts to construct such an optimized grasp using a given number of grasp contacts for specifically shaped objects in two or three dimensions. This approach is also time and computationally intensive and, like the other conventional approaches, still does not reliably result in a feasible grasp using a robotic gripper, or grasping device.
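As one minimal illustration of a grasp-closure property, the following sketches a standard two-contact antipodal force-closure test in 2D: the grasp achieves force closure if the line connecting the two contacts lies inside both friction cones. The function name and the simplification to two point contacts with Coulomb friction are assumptions for illustration, not the full constructions referred to above:

```python
import math

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Simplified 2D force-closure test for a two-contact grasp.

    p1, p2: contact points (x, y); n1, n2: inward-pointing unit
    normals at those contacts; mu: Coulomb friction coefficient.
    """
    half_angle = math.atan(mu)          # friction-cone half angle
    # Unit vector along the line from contact 1 to contact 2.
    d = (p2[0] - p1[0], p2[1] - p1[1])
    norm = math.hypot(d[0], d[1])
    d = (d[0] / norm, d[1] / norm)
    # Angle between the contact line and each inward normal.
    a1 = math.acos(max(-1.0, min(1.0, d[0] * n1[0] + d[1] * n1[1])))
    a2 = math.acos(max(-1.0, min(1.0, -d[0] * n2[0] - d[1] * n2[1])))
    # Force closure iff the line lies inside both friction cones.
    return a1 <= half_angle and a2 <= half_angle
```

For example, two contacts on opposite faces of a box with opposing normals pass the test, while a contact whose normal is perpendicular to the contact line fails it.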
The third above-mentioned conventional approach plans grasps using explicit kinematic hand constraints, i.e., robot hand data. The approach starts with a set of good initial feasible grasp positions and then iteratively refines the grasp positions based on a grasp quality measure by exploring adjacent grasp candidates that satisfy the kinematics of the robotic hand.
Disadvantages of this approach include the requirement for a good initial feasible grasp position; without one, the search process terminates with a failure or at a local minimum. Another disadvantage is that the process, including the convergence on a preferred grasp, which can require fifty to eighty iterations or more, is slow and computationally intensive.
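The refine-from-an-initial-grasp process described above can be sketched as a generic hill-climbing loop. The function names are illustrative assumptions; a real implementation searches the hand's joint space and admits only kinematically feasible neighbors. Note how both disadvantages appear directly in the sketch: a poor starting point strands the loop at a local optimum, and convergence may consume the full iteration budget:

```python
def refine_grasp(initial_grasp, quality, neighbors, max_iters=80):
    # Hill-climbing refinement: from a feasible initial grasp, repeatedly
    # move to the best adjacent candidate that improves the quality
    # measure, stopping at a (possibly merely local) optimum.
    current = initial_grasp
    current_q = quality(current)
    for _ in range(max_iters):
        candidates = neighbors(current)   # kinematically feasible neighbors
        best = max(candidates, key=quality)
        if quality(best) <= current_q:
            break                          # local optimum reached
        current, current_q = best, quality(best)
    return current
```

A toy scalar example: with `quality = lambda g: -(g - 3.0) ** 2` and neighbors one step of 0.1 to either side, starting from 0.0 takes about thirty iterations to converge near 3.0, illustrating why fifty to eighty iterations in a high-dimensional joint space is slow.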