Traditionally, software development systems and products have been provided for and targeted at users of a certain skill level. These products may employ several methodologies for delivering scalable products and adaptive architectures, such as 3GL (Third Generation Language) and 4GL (Fourth Generation Language) systems. However, these systems have been limited in their ability to accommodate different knowledge and skill levels of users and to provide for progressive specialization of the product to various application domains. That is, these systems have poor extensibility and are not adaptive.
Other software development systems and products have attempted to utilize intelligent agents to perform specific tasks or functions within a network environment. However, uses of agents have generally been limited to specific groups of users at a uniform level of expertise. In addition, intelligent agent functions have mostly related to desktop/office system functionality—e.g., automatic spelling correctors, automatic email address selectors, etc.
One more recent intelligent agent system, developed by the present inventor, involves the use of a tactical intelligent agent for decision making in the area of air combat, as described in U.S. Pat. No. 6,360,193, entitled “Method and system for intelligent agent decision making for tactical aerial warfare,” to Alexander D. Stoyen, and assigned to 21st Century Systems, the assignee of the present application, the details of which are hereby incorporated by reference. In Dr. Stoyen's system, intelligent agents collaborate among themselves in either a homogeneous (i.e., among intelligent agents of the same type) or heterogeneous (i.e., among intelligent agents of different types operating in a common problem space) environment.
These intelligent agents also collaborate with human users, and accept corrections to the “environment” (as the agents perceive it) from the user, in either delayed or real-time fashion. These intelligent agents further take into consideration such factors as the mental and physical state of a human user, including the user's degree of fatigue, stress, etc.
In Dr. Stoyen's AWACS Trainer Software (ATS), which is one exemplary application of the present invention, there is a tactical intelligent agent for decision making in the area of air combat. Other situations may also be used with the present invention, as described in more detail below. The agent is tactical because it considers not only immediate certainties and near certainties (e.g., if a hostile fighter is not shot at, it will shoot at us) but also longer-term possibilities (e.g., if the bulk of our fighters are committed early, they may not be available should an enemy strike force appear in the future). The agent is intelligent because it exhibits autonomous behavior and engages in a human-like decision process. The agent assists in decision making in the area of air combat because the agent gives explicit advice to human AWACS Weapons Directors (WD) whose job it is to coordinate air combat. The agent is also capable of making independent decisions in the area of air combat, replacing a human WD.
ATS employs groups of collaborating intelligent agents for decision making. The agents are collaborating because not every agent has all the information regarding the problem at hand, and because global decisions that affect all agents and humans are made on the basis of agents exchanging, debating and discussing information, and then making overall decisions. Thus, for instance, agents assisting individual WDs exchange threat information and then coordinate their recommendations, such as which fighters to commit to which enemy assets, without resource collisions. That is, an agent A will not recommend to its WD A to borrow a fighter pair P from another WD (WD B) while WD B's agent (agent B) recommends to WD B to use the same fighter pair P to target another threat.
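The collision-free coordination described above can be sketched as follows. This is a purely illustrative example, not part of the patented system; all class names and messages are assumptions. The key idea is that advising agents reserve a fighter pair through a shared registry before recommending it, so two agents can never recommend the same pair for conflicting tasks.

```python
class ResourceRegistry:
    """Tracks which fighter pairs are already committed to a task."""

    def __init__(self):
        self._claims = {}  # fighter pair id -> task it is committed to

    def claim(self, pair_id, task):
        """Reserve a pair for a task; fail if another task already holds it."""
        if pair_id in self._claims and self._claims[pair_id] != task:
            return False
        self._claims[pair_id] = task
        return True


class AdviserAgent:
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry

    def recommend(self, pair_id, task):
        """Recommend a pair only if it can be claimed without collision."""
        if self.registry.claim(pair_id, task):
            return f"{self.name}: commit pair {pair_id} to {task}"
        return f"{self.name}: pair {pair_id} unavailable, pick another"


registry = ResourceRegistry()
agent_a = AdviserAgent("agent A", registry)
agent_b = AdviserAgent("agent B", registry)
print(agent_a.recommend("P", "escort strike package"))
print(agent_b.recommend("P", "intercept threat"))  # collision is detected
```

Here the registry stands in for the inter-agent information exchange; in the actual system the agents negotiate rather than consult a single shared object.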
ATS supports collaboration among (a heterogeneous set of) intelligent agents and a combination of (a heterogeneous set of) intelligent agents and humans. The set of agents is heterogeneous because it includes role-playing agents (e.g., an agent that plays a WD) and adviser agents (e.g., an agent that recommends a particular fighter allocation to a WD) (as well as other agents). The set of humans is heterogeneous because it includes WDs and Senior WDs (different roles, a.k.a. SDs). Agents and humans collaborate because agents and humans jointly perform air combat tasks.
ATS also provides a feedback loop between an intelligent agent and a user. Agents and users (humans or other agents) exchange information while ATS is processing. As changes occur (e.g., new planes appear), agents and users exchange this information and the agents naturally adjust (as do the users). For instance, as a pair of fighters becomes available, an agent may recommend to the human WD how to assign this pair. The WD's reaction results in the agent learning what happened and possibly how to (better) advise the WD in the future. In particular, the agent may also change its perception of the environment. For instance, a repeated rejection of a particular type of agent recommendation may result in the agent re-prioritizing objects and actions it perceives.
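The rejection-driven re-prioritization described above can be sketched as follows. The rejection threshold and all names are assumptions for illustration only; the actual system may use a richer learning mechanism.

```python
from collections import defaultdict


class FeedbackAgent:
    """Tracks user responses per recommendation type and de-prioritizes
    types that are repeatedly rejected."""

    REJECTION_LIMIT = 3  # assumed threshold before re-prioritizing

    def __init__(self):
        self.rejections = defaultdict(int)
        self.suppressed = set()

    def record_response(self, rec_type, accepted):
        """Learn from the user's accept/reject decision."""
        if accepted:
            self.rejections[rec_type] = 0
        else:
            self.rejections[rec_type] += 1
            if self.rejections[rec_type] >= self.REJECTION_LIMIT:
                self.suppressed.add(rec_type)  # agent re-prioritizes

    def should_recommend(self, rec_type):
        return rec_type not in self.suppressed


agent = FeedbackAgent()
for _ in range(3):
    agent.record_response("borrow_fighter_pair", accepted=False)
assert not agent.should_recommend("borrow_fighter_pair")
assert agent.should_recommend("assign_cap_fighters")
```

This also captures the learning behavior of the later example, in which an agent stops recommending that a WD re-task fighters on their way to tank after repeated rejections.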
ATS provides intelligent agents representing multiple users (e.g., impersonating or assisting WDs, SDs, instructors). These agents collaborate, as already illustrated. However, the agents do not all perceive the environment the same way. For instance, an agent representing WD A may only be able to probe the status of the planes WD A controls. An agent representing another WD B may only be able to probe the status of the planes controlled by WD B. An agent representing an SD is able to probe the status of a plane controlled by any WD that reports to the SD. A strike WD may command a stealth bomber which does not show on AWACS radar, and thus even its position and movement are not visible to the other WDs.
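The role-based visibility described above can be sketched as follows. The data model, plane identifiers, and role names are assumptions for illustration: each agent may probe only the planes its role permits, so a WD's agent sees that WD's planes while an SD's agent sees the planes of every WD reporting to it, and planes outside an agent's purview (such as another WD's stealth bomber) are simply invisible.

```python
# Hypothetical problem-space state: plane id -> controlling WD and status.
PLANES = {
    "F1": {"controller": "WD_A", "status": "CAP"},
    "F2": {"controller": "WD_B", "status": "tanking"},
    "B1": {"controller": "WD_strike", "status": "inbound"},  # stealth bomber
}


def probe(controlled_wds):
    """Return the statuses of planes this agent is permitted to observe."""
    return {
        plane_id: info["status"]
        for plane_id, info in PLANES.items()
        if info["controller"] in controlled_wds
    }


wd_a_view = probe({"WD_A"})                      # agent advising WD A
sd_view = probe({"WD_A", "WD_B", "WD_strike"})   # agent advising the SD
assert wd_a_view == {"F1": "CAP"}
assert set(sd_view) == {"F1", "F2", "B1"}
```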
In ATS, intelligent agents learn over time by accumulating knowledge about users' behavior, habits and psychological profiles. An agent may observe that a WD it advises tends to always accept recommendations to target an advancing enemy with CAP'ed (engaged in Combat Air Patrol assignment) fighters but never with fighters on their way to tank (even though the agent may consider these fighters adequately fueled and otherwise ready for another dog-fight). The agent may then over time learn not to recommend that the WD assign fighters on their way to tank to other tasks.
The intelligent agent may observe that a WD tends to press mouse buttons more times than needed to accept a recommendation. This observation may lead the agent to believe that the WD is overly stressed and tired. The agent may then recommend to the SD's advising agent to recommend that the SD consider rotating this WD out. Perhaps as a compromise, the two agents and the two humans (the WD and the SD) may then decide that the best course of action is for the WD to continue for a while but that no fighters be borrowed for other tasks from this WD, and that after the next air combat engagement, the WD be rotated out anyway.
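The stress heuristic described above can be sketched as follows. The click-count baseline, thresholds, and advisory messages are all assumptions for illustration: excess mouse clicks per accepted recommendation are treated as a fatigue signal and escalated, with a compromise option between "no action" and full rotation.

```python
def stress_level(clicks_per_acceptance, expected_clicks=1):
    """Rate user stress from how many extra clicks each acceptance took."""
    excess = sum(max(c - expected_clicks, 0) for c in clicks_per_acceptance)
    return excess / len(clicks_per_acceptance)


def advise_sd(stress):
    """Map a stress score to the advice relayed to the SD's agent."""
    if stress >= 2.0:
        return "recommend rotating WD out"
    if stress >= 1.0:  # compromise: keep WD on under restrictions
        return "keep WD on, but borrow no fighters; rotate after engagement"
    return "no action"


# A relaxed WD accepts with single clicks; a stressed one needs several.
assert advise_sd(stress_level([1, 1, 1])) == "no action"
assert advise_sd(stress_level([4, 4, 4])) == "recommend rotating WD out"
```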
In addition, multiple intelligent agents and humans may be involved in the ATS decision making process, and differences in opinion as to what constitutes the best course of action may result. The reasons for the differences include the following: non-uniform availability of information (e.g., a particular agent may be privy to detailed information on the planes that belong to its WD only), strategy preferences (e.g., a particular WD may be very risk-averse compared to others), and one group's considerations vs. another group's considerations (e.g., a WD (and its agent) may not wish to lose a pair of fighters; on the other hand, from the point of view of the entire WD team, it may be acceptable to send that same pair of fighters to divert enemy air defenses (at great risk to themselves) away from a strike package). Given the differences in opinion, the ATS agents exchange opinions and debate options, among themselves and with humans. Standard resolution protocols may be used to ensure that an overall decision is reached after a finite number of such exchanges. Examples include standard neural networks, standard Ethernet collision resolution, standard packet collision resolution, standard two-phase commit in databases, and other standard negotiating techniques. Also, in an operational or training setting, the SD (or other human in charge) can ultimately force a decision, even in disagreement with agents (or humans).
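One simple resolution protocol of the kind named above can be sketched as a single-round weighted vote with a human override; the weights and option names are assumptions, and the real system may use any of the negotiation techniques listed. Because the vote concludes in one round, a decision is reached after a finite number of exchanges, and the override models the SD forcing a decision.

```python
def resolve(opinions, override=None):
    """Resolve differing opinions into one overall decision.

    opinions: list of (participant, option, weight) tuples.
    override: if set, a human in charge (e.g., the SD) forces this option.
    """
    if override is not None:
        return override
    tally = {}
    for _participant, option, weight in opinions:
        tally[option] = tally.get(option, 0.0) + weight
    return max(tally, key=tally.get)  # option with greatest weighted support


opinions = [
    ("agent A", "commit_pair_P", 1.0),
    ("agent B", "hold_pair_P", 1.0),
    ("WD A", "hold_pair_P", 2.0),  # humans weighted higher (an assumption)
]
assert resolve(opinions) == "hold_pair_P"
assert resolve(opinions, override="commit_pair_P") == "commit_pair_P"
```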
While the ATS system and the developments described in U.S. Pat. No. 6,360,193, entitled “Method and system for intelligent agent decision making for tactical aerial warfare,” achieved their goals for their specific purpose, there has been no functional and methodological solution available for the creation, management, modification and extension of software agent systems that span a hierarchically layered construct of both general and specific application domains. Further still, traditional systems have not provided for coordination among agents in an integrated environment of software development for progressive specialization. Agent construction toolkits have traditionally been limited to specific application domains, and do not span the broader range of general and specific domains of applications and/or functionality addressed by the present invention.