This invention relates to the field of coordinating multiple agents in collective searching, specifically coordinating mobile robots for collective searching.
Many challenging applications in robotics involve distributed searching and sensing by a team of robots. Examples include mapping minefields, extraterrestrial and undersea exploration, volcano exploration, location of chemical and biological weapons, and location of explosive devices. In general, such applications can involve rough terrain including obstacles, non-stationary and dilute search goals, deliberate interference with the searchers, and limited opportunities for human interaction with the searchers. Limited human intervention makes teleoperation problematic, and suggests a need for decentralized coordination schemes which feature collective decision-making by individual autonomous robots. Cost considerations when applied to large groups of searchers suggest a need for distributed coordination that uses shared data to overcome limited sensor precision.
Designing a robot team to search a sensate region for a specific target phenomenon involves numerous engineering tradeoffs among robot attributes and environmental variables. For example, battery-powered robots have a finite energy store and can search only a limited area before depleting it. Success in finding a target source with finite energy resources can also depend on other characteristics of the robot, such as sensor accuracy and noise and the efficiency of the locomotive subsystem, as well as properties of the collective search such as the number of robots in the team, the use of shared information to reduce redundant search, and the team coordination strategy used to ensure a coherent search process.
Numerous team coordination strategies have been proposed. See, e.g., Cao et al. “Cooperative Mobile Robotics: Antecedents and Directions”, Proceedings of IEEE/RSJ IROS (1995). Strategies for cooperative action encompass theories from such diverse disciplines as artificial intelligence, game theory and economics, theoretical biology, distributed computing and control, animal ethology, and artificial life. For example, Reynolds simulated the formation of flocks, herds, and schools in which multiple autonomous agents were driven away from obstacles and each other by inverse square law repulsive forces. See Reynolds “Flocks, Herds, and Schools”, Computer Graphics, Volume 21 No. 4, pp. 25-34 (1987). Part of the motivation behind Reynolds' work is the impression of centralized control exhibited by actual bird flocks, animal herds, and fish schools, despite the fact that each agent (bird, animal, or fish) is responding only to its limited-range local perception of the world.
Most current coordination strategies do not include a formal development of the system dynamics. See, e.g., Brooks “Intelligence Without Reason”, Proceedings of IJCAI-91 (1991); Misawa “Discrete-Time Sliding Mode Control: the Linear Case”, Journal of Dynamic Systems, Measurement, and Control, Volume 119 (1997). Consequently, important system properties such as stability, reachability, observability, and robustness cannot be characterized. Many of the schemes rely on stable controls at a lower level and provide coordination at a higher level. The coordination is often heuristic and ad hoc.
Appropriate coordination strategies can be used in applications beyond teams of physical robots. For example, autonomous software agents, properly coordinated, can search for information or trends in cyberspace or other electronic storage.
Accordingly, there is a need for a coordination method that can use shared information to reduce energy expended, compensate for noisy or deliberately misleading sensors, and allow robust collective searching.
The present invention comprises a decentralized coordination strategy called alpha-beta coordination that can use shared information to reduce energy expended, compensate for noisy or deliberately misleading sensors, and allow robust collective searching. The alpha-beta coordination strategy is a family of collective search methods that allow teams of communicating agents to implicitly coordinate their search activities through a division of labor based on self-selected roles and self-determined status. An agent can play one of two complementary roles. An agent in the alpha role is motivated to improve its status by exploring new regions of the search space. An agent in the beta role is also motivated to improve its status, but is conservative and tends to remain aggregated with other agents until alpha agents have clearly identified and communicated better regions of the search space. An agent can select its role dynamically based on its current status value relative to the status values of neighboring team members. Status can be determined by a function of the agent's sensor readings, and can generally be a measurement of source intensity at the agent's current location. An agent's decision cycle can comprise three sequential decision rules: (1) selection of a current role based on the evaluation of the current status data, (2) selection of a specific subset of the current data, and (3) determination of the next heading using the selected data. Variations of the decision rules produce different versions of alpha and beta behaviors that lead to different collective behavior properties.
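The three-rule decision cycle described above can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation: the role-selection threshold (neighborhood median), the data-selection policies, and the heading rules (centroid of selected neighbors, random heading when no data is selected) are assumptions chosen to exemplify one possible variation of the rules, and all names such as `choose_role` and `next_heading` are hypothetical.

```python
import math
import random

ALPHA, BETA = "alpha", "beta"

def choose_role(my_status, neighbor_statuses):
    """Rule 1: self-select a role from current status data.
    Assumed policy: an agent at or above the neighborhood median
    status acts conservatively (beta); one below it explores (alpha)."""
    if not neighbor_statuses:
        return ALPHA  # no shared data: explore
    median = sorted(neighbor_statuses)[len(neighbor_statuses) // 2]
    return BETA if my_status >= median else ALPHA

def select_data(role, my_status, neighbors):
    """Rule 2: select a specific subset of the current data.
    Assumed policy: an alpha agent attends only to neighbors reporting
    better status (candidate new regions); a beta agent attends to all
    neighbors, which tends to keep it aggregated with the team."""
    if role == ALPHA:
        return [n for n in neighbors if n["status"] > my_status]
    return list(neighbors)

def next_heading(role, my_pos, selected):
    """Rule 3: determine the next heading from the selected data.
    Assumed policy: head toward the centroid of the selected neighbors;
    an alpha agent with no better neighbor explores in a random direction."""
    if not selected:
        return random.uniform(0.0, 2.0 * math.pi)
    cx = sum(n["pos"][0] for n in selected) / len(selected)
    cy = sum(n["pos"][1] for n in selected) / len(selected)
    return math.atan2(cy - my_pos[1], cx - my_pos[0])

def decision_cycle(my_status, my_pos, neighbors):
    """One full decision cycle: role, then data subset, then heading."""
    role = choose_role(my_status, [n["status"] for n in neighbors])
    selected = select_data(role, my_status, neighbors)
    return role, next_heading(role, my_pos, selected)
```

Under these assumed policies, a low-status agent selects the alpha role and heads toward neighbors reporting stronger source intensity, while a high-status agent selects the beta role and moves toward the team centroid; swapping in different threshold, subset, or heading rules yields the different alpha and beta behavior variations the strategy contemplates.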
Advantages and novel features will become apparent to those skilled in the art upon examination of the following description or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.