A. Conflicts in Telecommunications Networks
There are at least three ways conflicts can arise in a distributed system such as a telecommunications network. One way is that users of a telecommunications network may disagree on the particular form of a communication session to be established among them. For example, party A may have an unlisted number, whereas party B may want to see the number of the calling party before accepting any call. When party A calls party B, a conflict arises. This kind of conflict is called a session conflict because it involves disagreement over how a communication session should be established. Another example of a session conflict arises when a first party A to a communication session wants to include a third party C, but a second party B to the session does not.
In general, session conflicts involve disagreements over whether to establish communication or what the nature of the communication will be.
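The session conflict described above can be viewed as a compatibility check between the parties' session preferences. A minimal sketch, assuming hypothetical preference fields (`reveal_number`, `require_caller_id`) invented here for illustration:

```python
# Hypothetical sketch of session-conflict detection; the preference
# fields are illustrative, not taken from any actual network protocol.

def session_conflict(caller_prefs, callee_prefs):
    """Return True if the parties' session requirements are incompatible."""
    # Party A keeps its number unlisted; party B insists on seeing it.
    if callee_prefs.get("require_caller_id") and not caller_prefs.get("reveal_number"):
        return True
    return False

a = {"reveal_number": False}      # party A has an unlisted number
b = {"require_caller_id": True}   # party B screens incoming calls
print(session_conflict(a, b))     # -> True
```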
A second kind of conflict arises over the use of resources external to the network. For example, if telecommunication user A is at home and user B is visiting A's home, calls forwarded from B's home to A's home may conflict in their use of resources with calls to A. This kind of conflict is called a station conflict because it involves equipment at a user station.
A third kind of conflict involves the use of scarce network resources. For example, a conference call requires special equipment called a bridge that combines the signals from multiple sources into a single signal. If too many conference calls are attempted simultaneously, the network will not be able to provide enough bridges for them. This kind of conflict is called a resource conflict.
Some session conflicts and station conflicts occur as a result of what is called a feature interaction. A feature interaction arises when one feature in a telecommunications network interferes with the expected operation of another feature.
B. Negotiation Among Cooperating Systems
The above-described conflicts which arise in a telecommunications network are examples of the type of conflicts which distributed systems experience over which activities to perform. Such conflicts arise as the individual entities in the distributed systems make incompatible decisions because they base their decisions on different information or because they try to achieve different goals. To resolve conflicts, individual entities in a distributed system need to interact, exchanging information and possibly changing their own goals or trying to change the goals of other systems. The resulting interactions constitute a negotiation process.
In some negotiation mechanisms, an entity is represented by an agent. As used herein, the term agent refers to a computer process which represents a corresponding entity in a negotiation.
Typically, one agent sends another agent information about the goals it tries to achieve and the alternative plans it finds acceptable for achieving those goals. This information forms a collection of proposals from which the other agent gets to pick one that is acceptable to it.
In some negotiation domains, the proposals and counterproposals which form the objects of negotiation can be represented by fixed sets of (numerical) attributes. In such cases, evaluation of proposals and generation of counterproposals may be implemented through a relatively simple combination of functions on those attributes. Examples of such domains are negotiations over price and features of a new car and negotiations for scheduling meetings. Negotiating agents are used in domains in which the objects of negotiation cannot be represented through such fixed sets of attributes. In such domains, a counterproposal may contain not only different values of the same attributes present in the proposal it responds to, but also entirely different attributes. Potentially, there are many attributes to choose from for incorporation in a counterproposal, but only some of them will seem relevant or "reasonable" to a human observer. For such domains, the negotiation method of the present invention, involving the use of negotiating agents, adds the following two elements to current techniques for negotiation in DAI (Distributed Artificial Intelligence):
(1) A method to evaluate proposals (determine acceptability or unacceptability); and
(2) A method to decide on what is a "reasonable" counterproposal when a received proposal is not acceptable.
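In a fixed-attribute domain such as negotiating a car's price, these two methods reduce to simple functions on numeric attributes, as the contrast drawn above suggests. A hedged sketch, not the invention's method; the concession rule and all names are illustrative:

```python
# Illustrative only: proposal evaluation and counterproposal generation
# in a fixed-attribute domain, implemented as simple numeric functions.

def evaluate(proposal, max_price):
    """Method (1): decide whether a received proposal is acceptable."""
    return proposal["price"] <= max_price

def counterpropose(proposal, max_price, step=0.5):
    """Method (2): generate a counterproposal by moving the unacceptable
    attribute part-way toward the agent's own limit (a made-up rule)."""
    price = max_price + step * (proposal["price"] - max_price)
    return {"price": round(price, 2)}

offer = {"price": 12000}
if not evaluate(offer, max_price=10000):
    print(counterpropose(offer, max_price=10000))  # -> {'price': 11000.0}
```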
A wide variety of negotiating processes have been disclosed in the prior art related to distributed artificial intelligence. See, e.g., S. Cammarata, D. McArthur, and R. Steeb, "Strategies of Cooperation in Distributed Problem Solving," In Proceedings IJCAI-83, pp. 767-770, Karlsruhe, 1983; R. Clark, C. Grossner, and T. Radhakrishnan, "Consensus: A Planning Protocol for Cooperating Expert Systems," In Proceedings 11th Workshop on Distributed Artificial Intelligence, pp. 43-58, Glen Arbor, Mich., February 1992; S. E. Conry, K. Kuwabara, V. R. Lesser, and R. A. Meyer, "Multistage Negotiation for Distributed Constraint Satisfaction," IEEE Transactions on Systems, Man, and Cybernetics, 21(6):1462-1477, November/December 1991; R. Davis and R. G. Smith, "Negotiation as a Metaphor for Distributed Problem Solving," Artificial Intelligence, 20:63-109, 1983; E. H. Durfee and V. R. Lesser, "Negotiating Task Decomposition and Allocation Using Partial Global Planning," chapter 10, pp. 229-243, Pitman, London, 1989; S. Kraus, E. Ephrati, and D. Lehmann, "Negotiation in a Non-cooperative Environment," Journal of Experimental and Theoretical Artificial Intelligence, 1:255-281, 1991; A. Sathi and M. S. Fox, "Constraint-Directed Negotiation of Resource Reallocation," In L. Gasser and M. N. Huhns, editors, "Distributed Artificial Intelligence, Volume II," Research Notes in Artificial Intelligence, chapter 8, pp. 163-193, Pitman, London, 1989; S. Sen and E. H. Durfee, "A Formal Analysis of Communication and Commitment in Distributed Meeting Scheduling," In Proceedings 11th Workshop on Distributed Artificial Intelligence, pp. 333-344, Glen Arbor, Mich., February 1992; K. P. Sycara, "Argumentation: Planning Other Agents' Plans," In Proceedings IJCAI-89, pp. 517-523, Detroit, Mich., 1989; F. von Martial, "Coordination by Negotiation Based on a Connection of Dialogue States With Actions," In Proceedings 11th Workshop on Distributed Artificial Intelligence, pp. 227-246, Glen Arbor, Mich., February 1992; R. Weihmayer and R. Brandau, "A Distributed AI Architecture for Customer Network Control," In Proceedings IEEE Global Telecommunications Conference (GLOBECOM '90), pp. 656-662, San Diego, Calif., 1990; G. Zlotkin and J. S. Rosenschein, "Cooperation and Conflict Resolution Via Negotiation Among Autonomous Agents in Noncooperative Domains," IEEE Transactions on Systems, Man, and Cybernetics, 21(6):1317-1324, November/December 1991.
Several of the negotiation mechanisms are based on a hierarchical representation of goals and alternative ways to achieve these goals. The goal hierarchy is used for finding a plan that achieves the goals of all involved agents but that does not involve conflicting activities.
The hierarchies in R. Clark, C. Grossner, and T. Radhakrishnan, "Consensus: A Planning Protocol for Cooperating Expert Systems," In Proceedings 11th Workshop on Distributed Artificial Intelligence, pp. 43-58, Glen Arbor, Mich., February 1992, are AND/OR/XOR trees whose nodes are goals. Rather than using a specification of a goal as a proposal, entire particular hierarchies are used as proposals. This approach involves the unconditional disclosure of more information in each proposal about an agent's goals and options than in the present invention. However, the protocol will settle on a compromise in a fixed and limited number of steps.
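An AND/OR/XOR goal tree of the kind just described can be checked for achievability by simple recursion over its nodes. The node encoding and the check below are an illustration, not the representation actually used by Clark et al.:

```python
# Illustrative AND/OR/XOR goal tree; node encoding is invented here.
# Leaves name goals; interior nodes combine children with AND/OR/XOR.

def satisfiable(node, achievable):
    """Can this goal be achieved, given the set of achievable leaf goals?"""
    children = node.get("children", [])
    if not children:                    # leaf goal
        return node["goal"] in achievable
    results = [satisfiable(c, achievable) for c in children]
    op = node["op"]
    if op == "AND":
        return all(results)
    if op == "OR":
        return any(results)
    if op == "XOR":                     # exactly one alternative may hold
        return sum(results) == 1
    raise ValueError(f"unknown operator: {op}")

tree = {"op": "AND", "children": [
    {"goal": "connect"},
    {"op": "OR", "children": [{"goal": "voice"}, {"goal": "video"}]},
]}
print(satisfiable(tree, {"connect", "voice"}))  # -> True
```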
In the Multistage negotiation protocol (see, S. E. Conry, K. Kuwabara, V. R. Lesser, and R. A. Meyer, "Multistage Negotiation for Distributed Constraint Satisfaction," IEEE Transactions on Systems, Man, and Cybernetics, 21(6):1462-1477, November/December 1991), collections of plan fragments, which are organized in a hierarchy, are used as proposals. Other agents receiving proposals either select suitable plan fragments from the proposal or send notification that no acceptable plan fragment was included in the proposal. The hierarchy is not used to reason about other agents' goals, which are in any case derived from a global goal that is shared by all the agents. The approach, therefore, is more suited for a distributed problem-solving system than for an environment of autonomous agents.
K. P. Sycara, "Argumentation: Planning Other Agents' Plans," In Proceedings IJCAI-89, pp. 517-523, Detroit, Mich., 1989, uses a hierarchy to determine which arguments to use to influence other agents' evaluation of proposals. This hierarchy represents alternative ways of achieving goals. It is not used to determine the goals of the agents involved or to perform the actual evaluation of proposals.
The hierarchy used in R. Weihmayer and R. Brandau, "A Distributed AI Architecture for Customer Network Control," In Proceedings IEEE Global Telecommunications Conference (GLOBECOM '90), pp. 656-662, San Diego, Calif., 1990, is a tree, which represents only abstraction relationships (alternatives). Moreover, this hierarchy is not used to reason about other agents' goals. Rather, it serves as a representation of alternative plans achieving an agent's own goals and can be used to generate subsequent proposals. Additional information (such as the cost of an alternative or the availability of alternatives at a certain level of abstraction) is passed between the agents to direct and coordinate the search processes through the agents' hierarchies.
However, the negotiation methods described in the prior art have several shortcomings. In particular, the prior art negotiation methods generally require the agents for the various entities to exchange a great deal of information about which alternatives for achieving a goal are acceptable and which are not. In some applications, such as a telecommunications network, information about which alternatives are or are not acceptable to an agent is usually restricted, either because it is strategic information (used to find an agreement that is most advantageous to an agent) or because it is private information (e.g., information about policies a subscriber would rather keep private, such as call screening lists).
In addition, the prior art negotiation techniques do not provide satisfactory methods for enabling receiving agents to evaluate received proposals to determine their acceptability or unacceptability and to generate a counterproposal in the event a received proposal is unacceptable.
In particular, the prior art negotiation techniques do not allow a receiving agent which receives an unacceptable proposal from a transmitting agent to infer the goal of the transmitting agent and generate for transmission back to the transmitting agent a counterproposal which realizes the inferred goal.
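The capability described above, inferring the transmitting agent's goal from an unacceptable proposal and returning a counterproposal that realizes the same goal another way, can be sketched as a lookup over shared goal/plan knowledge. The goal/plan table and all names below are hypothetical illustrations, not the invention's actual mechanism:

```python
# Speculative sketch of goal inference and counterproposal generation;
# the goal/plan table is invented purely for illustration.

GOALS = {                       # goal -> alternative plans achieving it
    "establish_call": ["direct_call", "operator_assisted", "callback"],
}

def infer_goal(proposal):
    """Infer a goal for which the proposed plan is one alternative."""
    for goal, plans in GOALS.items():
        if proposal in plans:
            return goal
    return None

def counterproposal(proposal, acceptable):
    """Return an acceptable alternative plan realizing the inferred goal."""
    goal = infer_goal(proposal)
    if goal is None:
        return None
    for plan in GOALS[goal]:
        if plan != proposal and plan in acceptable:
            return plan
    return None

print(counterproposal("direct_call", acceptable={"callback"}))  # -> callback
```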
It is an object of the present invention to overcome these shortcomings of the prior art negotiation methods.