Robot Programming by Demonstration (PbD) is a recent trend in robotics, employed to transfer new skills to robots from observations of tasks demonstrated by humans or other robots. A typical robot PbD learning process consists of observing the demonstrations (the task perception step), followed by task modeling and planning steps, and leading to execution of the task by the robot learner (the task reproduction step). Perception of the demonstration(s) can rely on different types of sensors, for example vision sensors, electromagnetic sensors, or inertial sensors; when a robot is employed to demonstrate a task, joint (sometimes referred to as articulation) measurements of the robot can also serve for task perception.
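The four-step process above can be illustrated with a minimal pipeline skeleton. This is a hypothetical sketch, not an implementation from any particular PbD system; all function names are illustrative, and the point-wise averaging in the modeling step stands in for the far more sophisticated generalization methods used in practice:

```python
# Hypothetical sketch of a PbD pipeline: perception -> modeling -> planning
# -> reproduction. All names and the averaging model are illustrative only.

def perceive(demonstrations):
    """Task perception: convert raw sensor streams (camera frames, joint
    readings, ...) into timestamped feature trajectories."""
    return [[(t, tuple(sample)) for t, sample in demo] for demo in demonstrations]

def model(observations):
    """Task modeling: a naive point-wise average of time-aligned
    demonstrations, standing in for a real generalization method."""
    n = len(observations)
    length = min(len(obs) for obs in observations)
    averaged = []
    for i in range(length):
        t = observations[0][i][0]
        dims = len(observations[0][i][1])
        mean = tuple(sum(obs[i][1][d] for obs in observations) / n
                     for d in range(dims))
        averaged.append((t, mean))
    return averaged

def plan(task_model):
    """Task planning: turn the generalized model into controller setpoints."""
    return [pose for _, pose in task_model]

def reproduce(setpoints, send_command=print):
    """Task reproduction: stream the planned setpoints to the robot."""
    for setpoint in setpoints:
        send_command(setpoint)
```

For two one-dimensional demonstrations passing through 2.0 and 4.0 at the same time step, the modeling step above would yield a setpoint of 3.0 at that step.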
Despite the applicability of different types of sensors to task perception, vision sensors such as cameras are of particular interest due to the non-intrusive character of vision-based measurements.
Remote fixed cameras have been employed in the past for teaching robots by demonstration. For example, visual PbD can be used to reproduce a human action with a robot: a fixed camera receives data representing the motions of the human demonstrator performing the action, which the robot then emulates. This approach, however, aims at teaching the robot to perform the movements in a manner similar to the human demonstrator, without visual servoing (i.e., vision-based control) during the robot's execution of the task.
Recent research has also tried to combine PbD with visual servoing. In some methods, a human demonstrator manually guides the robot's links so that an eye-in-hand camera (i.e., a camera mounted on the robot's end-point) records visual parameters of the task along with the corresponding joint measurements. These measurements may be used to obtain a generalized robot arm trajectory from several task demonstrations. Visual servoing from the eye-in-hand camera, along with joint servoing, is then used to follow the obtained generalized trajectory. Alternatively, a camera may be attached to a human demonstrator's limb together with joint angle or position sensors to teach the movement to be generalized, which may require scaling the trajectories of the human's links to the robot's joint controls. Such methods can be categorized (in either case) as kinesthetic demonstrations, and they are designed to teach robot trajectories from the standpoint of the robot's structure, as opposed to teaching trajectories of the manipulated objects.
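Eye-in-hand visual servoing of the kind mentioned above is commonly realized with the classical image-based control law v = -λ L⁺(s - s*), where s is the vector of observed image features, s* the desired features, and L the interaction matrix. The following is a minimal NumPy sketch for normalized point features with known depths; it illustrates the standard textbook law only, and is not the specific method of any work discussed here:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z: maps the 6-DOF camera twist [vx vy vz wx wy wz]
    to the image-plane velocity of the point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical image-based visual servoing law: v = -gain * pinv(L) @ e,
    with e = s - s* stacked over all point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

When the observed features coincide with the desired ones, the error vanishes and the commanded camera velocity is zero; otherwise the law drives the camera so as to reduce the feature error exponentially (for a well-conditioned interaction matrix, at least three non-collinear points are typically used).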