1. Field of the Invention
The techniques disclosed herein relate generally and broadly to autonomous robotic devices, and to robotic devices for Human Robot Interaction (HRI) and/or communication.
2. Description of Related Art
Robot Companions
Related art to the disclosed techniques can be found in the following areas: fields which involve interaction between robotic or robot-like devices and humans, such as the fields of robot companions, robot assistive or communicative devices in patient care, robotic and robot-like educational devices, and robot pets; and fields which involve sensing devices employed with robots and devices for imparting motion to robots.
In discussing human-robotic interaction, the human may be referred to as the user of the robot, the user in this case being the human who interacts with the robot as a companion, analogously to a human interacting with an animal companion. A human who is controlling or directing a robot that is interacting with a user is referred to as the operator of the robot, or as the puppeteer of the robot.
Robots that act or react under their own control are acting autonomously. Robots that are directed by an external entity such as an operator or a separate control computer are not acting autonomously.
Human-robotic Interaction
Robotic technologies are entering our everyday lives not only as tools and appliances but as daily companions. In healthcare, robots such as the Paro, described by Shibata et al in “Mental Commit Robot and its Application to Therapy of Children,” in International Conference on Advanced Intelligent Mechatronics, Como, Italy, 2001, pp. 1053-1058, and the NeCoRo from Omron Corporation, www.necoro.com, referenced in Libin and Libin, “Person-Robot Interactions From the Robopsychologists' Point of View: The Robotic Psychology and Robotherapy Approach,” Proceedings of the IEEE Vol 92 (11): pp. 1-15, 2004, are designed to function as simple robotic pet surrogates in hospitals and nursing homes in cases in which companion animals are not available.
Other similar robots, such as Robota, described by Billard et al in “Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism,” RESNA Assistive Technology Journal (In Press), 2006, and the Keepon, described by Kozima et al in “Designing and Observing Human-Robot Interactions for the Study of Social Development and its Disorders,” in IEEE Symposium on Computational Intelligence in Robotics and Automation, Espoo, Finland, 2005, together with investigations by Scassellati, “How social robots will help us to diagnose, treat, and understand autism,” in 12th International Symposium of Robotics Research (ISRR), 2005, have explored the use of robots to interact with autistic children.
Further, the robot Pearl described in “Pearl: A Mobile Robotic Assistant for the Elderly,” in AAAI Workshop on Automation as Eldercare, 2002 by Pollack et al has been developed as a mobile robot platform for use in nursing homes, while the work of Mataric reported in “Socially Assistive Robotics,” IEEE Intelligent Systems, pp. 81-83, 2006 has explored using socially assistive robots that combine human-robot interaction with assistive robotics.
One common thread which runs through these robots and others like them is that the experience that the human has through interacting with the robot is an important aspect of the overall design. These robots may have expressive faces, be outfitted with various sensors, or feature software behavior systems that all contribute to the overall interaction and encourage continued use while limiting boredom. In order for the robots to provide a therapeutic benefit, the human must be encouraged to interact continually with the robot.
However, existing art in these areas is generally limited in the behaviors that can be exhibited by the robots, and in the types or details of sensor input data to which the robot has access. The mechanical and physical design of existing robotic art results in “robotic” motion, appearance, or texture that detracts from the ability of the robot to engage in convincing social interaction with a human.
It is also known that there are many positive benefits of companion animal therapy in terms of lowering stress, reducing heart and respiratory rate, and showing positive changes in hormonal levels, as well as mood elevation and social facilitation. The use of animals for this form of therapy is often seen as the best solution. There are practical limitations in the existing forms of this therapy; for example, these companion animals are not readily available in some healthcare and education facilities due to fears of bites, allergies, or disease. Additionally, when this form of therapy is offered, it is generally offered as a regulated experience. An animal therapist must always be present, so sessions must occur as a scheduled activity once or twice a week, with sessions lasting only a few hours.
As an alternative to companion animal therapy, a new form of therapy, robot companion therapy or robot therapy, has emerged. Here robotic companions take the place of animals. Studies have been conducted using Sony's AIBO by Tamura et al in “Is an Entertainment Robot Useful in the Care of Elderly People with Severe Dementia?,” The Journals of Gerontology, vol. 59A, pp. 83-85, 2004; the Paro by Kidd et al in “A Sociable Robot to Encourage Social Interaction among the Elderly,” in International Conference on Robotics and Automation (ICRA2006), 2006; and the NeCoRo by Libin and Cohen-Mansfield in “Therapeutic robocat for nursing home residents with dementia: preliminary inquiry,” Am J Alzheimers Dis Other Demen., vol. 19, pp. 111-6, 2004. All of these studies show promise for this new form of therapy, specifically with improving people's mood and increasing social interaction. However, even with these results there is still much room for improvement in the designs of such robotic companions.
Robotic Sensors and Motion Devices
Existing art is also known for robotic sensors. For example, Lumelsky et al in “Sensitive Skin,” IEEE Sensors Journal, vol. 1, pp. 41-51, 2001, describe a “sensitive skin.” They describe such a sensory system as consisting of a large variety of sensors with processing capabilities that cover the entire surface of the robot. Recently there have been other implementations of “sensitive skin.” One such system uses surface covers for protection and better control, as described by Iwata and Sugano in “Whole-body Covering Tactile Interface for Human Robot Coordination,” presented at International Conference on Robotics and Automation, Washington, D.C., 2002. Another “sensitive skin,” described by Someya et al in “A large-area, flexible pressure sensor matrix with organic field-effect transistors for artificial skin applications,” Proceedings of the National Academy of Science USA, vol. 101, pp. 9966-9970, 2004, is focused on the detection of temperature and pressure in a single flexible skin. Other researchers have focused on the processing capabilities of such skins. A review of these approaches by Paradiso et al can be found in “Sensate Media—Multimodal Electronic Skins as Dense Sensor Networks,” BT Technology Journal, vol. 22, 2004.
In these prior art references, the goals for these skin designs were primarily to keep the robot from damaging itself or the people around it, or to sense the physical properties of objects. This is distinct from “affective touch” sensing, affective touch being defined as touch which either displays or evokes emotions or an apparent emotional response by haptic means, such as a comforting touch on the shoulder. This is also distinct from sensing of “sociable touch” or “social touch” and social communication, social touch being defined as touch and touch-gestures that communicate a particular social aspect or information, for example, a congratulatory pat on the shoulder that communicates approval and as such is a type of social reward. The realm of social or affective touch has been largely ignored or only weakly addressed in the prior art of “sensitive skin” and robotic design, and is addressed by the techniques disclosed here. In the prior art, most robotic systems feature only a small number of discrete tactile sensors, if any, and thus are not capable of distinguishing fully the wide variety of affective, social, or other higher order forms of touch.
Limitations of Other Systems, Advantages Over Prior Art
As can be readily appreciated, the existing art is subject to numerous and diverse limitations that are overcome by the techniques disclosed herein. The limitations include many which detract, often to a substantial degree, from the social or other interaction between the robot and the human, or from the ability of the human to make use of a robot-like device in a convenient or natural fashion.
Limitations Relating to a User's Perception of a Companion Robot
The Appearance of a Companion Robot
To encourage social interaction between a human and a robot, the robot should not have a robot-like appearance, as is the case in most existing robots. A robot-like appearance, as is typical of the current art, is inexpressive, cold, and unemotional. The robot's appearance should be such that it looks comfortable, and encourages the human to treat the robot in a social fashion.
The Feel of a Companion Robot
The feel of a companion robot when a user touches it is as important to the interactions between the user and the robot as is the robot's appearance. A robotic companion should be as pleasant to touch as is an animal pet, and should have a feel when touched similar to that of an animal pet. Existing robots generally have a hard, metallic, or other type of surface that does not encourage tactile interaction by the user, or that does not feel realistic.
The Motion of a Companion Robot
A related aspect is that the robot should be quiet as it moves, without the noisy gear sounds typical of many existing robotic platforms of the prior art. These sounds distract from the “illusion of life” and impede the interaction.
Further, the robot as it moves should not be a danger to the user. For example, the amount of output force by the motors should be limited, and if possible, the robot motion actuators or motors should be back drivable or use sensors to detect potentially dangerous situations, and the robot should include a control system or employ equivalent techniques to avoid or remediate these dangers.
Additionally, the robot should have some way of conveying its internal state through expressive motion and sounds, and this state should be clear to and understood by the user interacting with it. Companion animals have very clear behaviors expressed through their facial expression, body posture, and sounds. The robotic companion should feature an ability to convey behavior in a similar fashion. Existing robots, however, even with many degrees of freedom, do not generally have even a mechanical design that facilitates expressive action. For example, ear and head movement among other degrees of freedom could be used to demonstrate a large range of expression, but such movement is not included in most examples of the existing art, or is limited.
Limitations Relating to Autonomous Robotic Behavior
Autonomous robotic behavior is a sequence of actions taken by the robot in response to its environment or events in its environment, without the behavior being controlled or initiated by a separate entity, such as a human operator; the sequence of actions may further vary autonomously. This is distinct from non-autonomous behavior, which is defined as actions controlled directly by a separate entity, such as a human operator or puppeteer. This is further distinguished from semi-autonomous behavior, which is defined as a sequence of actions taken by the robot that are initiated by a command or input from a separate entity, but then carried out autonomously.
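The distinction among these control modes can be illustrated with a minimal sketch; the mode names, commands, and action sequences below are hypothetical, chosen only to make the definitions concrete:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"            # robot selects and performs actions itself
    SEMI_AUTONOMOUS = "semi-autonomous"  # operator initiates, robot completes
    TELEOPERATED = "teleoperated"        # operator controls each action directly

def next_action(mode, operator_command, sensed_event):
    """Choose the robot's next action under the given control mode.

    Action and event names are hypothetical, for illustration only.
    """
    if mode is Mode.TELEOPERATED:
        # Non-autonomous: every action comes directly from the operator.
        return operator_command
    if mode is Mode.SEMI_AUTONOMOUS:
        # Semi-autonomous: the operator triggers a behavior once; the
        # robot then carries it out as a sequence of actions on its own.
        if operator_command == "greet":
            return ["turn_toward_user", "wag_tail", "vocalize"]
        return None
    # Fully autonomous: the robot reacts to sensed events by itself.
    if sensed_event == "petting_detected":
        return ["purr", "nuzzle"]
    return None
```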
Visual communication and interaction, as well as audible communication and interaction, are also important in companion animal therapy, both from the human observing and responding to the animal, and the animal observing and responding to the human. These interactions range from simple ones, such as the human looking at the animal and the animal “autonomously” looking back, to interaction through touch, such as petting or scratching the animal with the animal exhibiting a visual and/or audible response; through sound, such as calling the animal's name or talking to it; or through holding the animal in a cuddling position, with the animal responding with gestures such as wagging the tail or with vocalizations such as purring. Autonomous behaviors in prior art for robotic companions have been limited, allowing only for simple interactions such as following a colored ball from visual sensing, responding to specific spoken words, or generating a yelping sound if the robot is struck, and generally do not combine senses and responses. Prior art in some circumstances supports autonomous behavior for such mechanical manipulations as the grasping of objects, but includes autonomous behavior neither for affective or social interaction, nor in response to affective or social touch.
Some of the types of tactile interactions known in animal companion therapy include hugging the animal, rocking it like a baby, petting it, tickling it, and carrying it, and the animal responding by purring, snuggling, wagging the tail, or coming close to the human. Thus, not only is it important for a robotic companion to feel pleasant to the touch, but the robot should also be capable of detecting a wide range of tactile interactions. This capability requires a dense network of tactile sensors and appropriate processing; this sort of capability is not generally found in the existing art, or is found only in a very limited form.
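The kind of tactile-gesture detection described above can be sketched as a simple heuristic classifier over readings from a dense tactile sensor array; the gesture labels, pressure thresholds, and sample format are illustrative assumptions, not values from any actual robot:

```python
def classify_touch(samples):
    """Heuristically classify a touch gesture from a time series of
    (sensor_index, pressure) readings taken at a fixed sampling rate.

    Pressures are normalized to 0..1; thresholds are assumptions.
    """
    if not samples:
        return "none"
    pressures = [p for _, p in samples]
    peak = max(pressures)
    sensors = [i for i, _ in samples]
    moving = len(set(sensors)) > 1          # contact sweeps across the skin
    if peak > 0.8 and len(samples) <= 3:
        return "slap"                       # brief, high-force contact
    if moving and 0.2 <= peak <= 0.8:
        return "petting"                    # sustained, moderate, moving
    if peak < 0.2 and len(samples) >= 4:
        return "tickling"                   # light, rapid, repeated
    return "unknown"
```

A practical system would use many more features (contact area, duration, repetition rate) and a trained classifier, but the sketch shows why dense coverage matters: the "petting" case is only distinguishable at all because adjacent sensors report the moving contact.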
Additionally, the use of multiple modalities of sensing, such as force in combination with temperature and other somatic sensing, more closely resembles the types of sensors in the skin of a real animal and can improve the distinction among various types of affective touch. Multiple modalities of sensing, such as in the form of a “sensitive skin”, are rare in the existing art, if found at all.
Further, the “sensitive skin” of a robot should be full body. If a user were to touch the robot somewhere on its body, and the robot failed to respond, the failure to respond would instantly break the illusion that the robot was alive in the same fashion as an animal companion. This illusion of life helps to improve the overall interaction, and the lack of a full body skin, as is generally the case in the relevant prior art, detracts from the illusion of life and thus limits the usability of these robots as companion robots.
Sensing orientation in space is also important. This is not limited to the orientation of the robot but rather also includes the spatial relationship between the user and the robot, and potentially to objects in the environment. Both aspects of orientation are needed for relational interactions between the robot and the user, for example, when an animal turns to look at a human as the human is petting it or when the human calls its name, or when the animal responds to being held in different orientations. Inertial measurement sensors, tilt switches, or other sensors that could be used to allow the robot to know its own orientation are generally absent in the existing art, or are used only in a limited fashion, leaving the robots unable to respond to affective interactions in a general fashion.
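As one illustration of such orientation sensing, a static pitch and roll estimate can be derived from a 3-axis accelerometer and mapped to a coarse posture, e.g. so that a robot can respond to being cradled on its back like a baby; the axis conventions, posture labels, and thresholds here are assumptions for the sketch:

```python
import math

def orientation_from_accel(ax, ay, az):
    """Estimate static orientation from a 3-axis accelerometer reading
    (in g). Returns (pitch, roll) in degrees; assumes z points up when
    the robot sits upright at rest.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def posture(ax, ay, az):
    """Map an orientation to a coarse posture label (hypothetical labels)."""
    pitch, roll = orientation_from_accel(ax, ay, az)
    if abs(roll) > 150:
        return "on_back"    # inverted about the roll axis, e.g. cradled
    if abs(pitch) > 60:
        return "upended"    # tipped nose-up or nose-down
    return "upright"
```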
Array microphones, cameras, and other similar types of sensing can provide a robot with a sense of location of the user with respect to the robot. Such capabilities are found in prior art systems, if at all, only in a limited fashion.
Limitations on the Prior Art Use of Robotic Companions
One aspect in which robotic companions can be different from companion animals is their ability to collect and share data. In health care, education, and other settings, such a capability could be greatly beneficial, but such robotic capabilities are not generally found in the prior art. Prior art robotic companions do not relay potentially harmful situations encountered by a user to the user's caregivers. The prior art robotic companions are thus unable to function as an active member of, or as an autonomous device in, a health care or educational team, rather than merely as a medical sensor or a toy.
Existing robotic companions are also of limited usefulness in rural health or home health settings.
Returning to the aspects of techniques concerning “sensitive skin”, Human Robot Interaction (HRI) applications pose a specific set of challenges in the design of robotic “sensitive skins.” Unlike the prior art of robotic manipulation in which a robot arm or manipulator must only deal with objects, in HRI applications the robot must be able to interact with both people and objects. This human interaction includes a realm of social touch, such as greeting with a handshake, and affective touch, such as petting a robotic companion. Social touch is not well-addressed in the prior art, in part because the robotic designs of the prior art feel to the user like a constructed machine.
A capability lacking in the prior art is that the “sensitive skin” be able to distinguish interaction with a user from interaction with an object. A robotic companion should be able to distinguish if it is sitting on someone's lap or a tabletop.
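One way such a distinction might be made is by fusing underside temperature with the pressure distribution across underside sensors: a warm, unevenly loaded support suggests a lap, while a cool, uniformly loaded one suggests a rigid tabletop. The following sketch assumes normalized pressure readings, and all thresholds are illustrative assumptions:

```python
def surface_type(temperature_c, pressure_map):
    """Heuristically decide whether the robot rests on a person's lap
    or on a rigid surface, from underside temperature (deg C) and a
    list of normalized underside pressure readings (0..1).
    """
    if not pressure_map:
        return "unsupported"                 # no contact at all
    warm = temperature_c > 30.0              # near body temperature
    mean = sum(pressure_map) / len(pressure_map)
    variance = sum((p - mean) ** 2 for p in pressure_map) / len(pressure_map)
    compliant = variance > 0.01              # soft support loads sensors unevenly
    if warm and compliant:
        return "lap"
    if not warm and not compliant:
        return "tabletop"
    return "uncertain"
```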
The skin itself must convey the “illusion of life” through full body coverage of sensors, such that it can respond to touch on any part of its body, just as an animal can respond to touch on any part of the animal's body. No matter how lifelike a robot's movement, if the robot is touched and does not respond to the touch, this illusion of life is instantly broken. Any such failure to respond can be frustrating to the user and affect the interaction. Closely related to this full body challenge for a “sensitive skin” is the fact that the skin itself must be designed to cover the complex geometry of the surface of a robot.
How the skin feels to a user touching the robot is equally important. It should feel pleasant to touch and not distract from the interaction. For example, if the robot is designed to look like a realistic or fantasy animal, it should not feel hard or excessively soft; rather, it should have the feel of a real animal. Existing robotic designs do not have skins that give this feel when touched.
Further, the “sensitive skin” must be able to detect a wide variety of social or affective touch interactions, such as handshakes, petting, tickling, slapping, or tapping, among others. Prior art is generally unable to detect affective or social touch interactions, being limited in the spatial resolution of the sensors, limited in the completeness of sensor coverage in all areas of the skin, limited in sensitivity of the sensors, or otherwise limited.
It is thus an object of the techniques disclosed in the following to provide improved robotic companions which overcome these and other limitations of prior art robotic companions.