Computer program listings and table appendices comprising duplicate copies of a compact disc, named "DEJI 1000-5", accompany this application and are incorporated by reference. The appendices include the following files:
APPENDIX I.txt 59 Kbytes created Sep. 19, 2002
APPENDIX II.txt 7 Kbytes created Sep. 19, 2002
APPENDIX III.txt 3 Kbytes created Sep. 19, 2002
APPENDIX IV.txt 24 Kbytes created Sep. 19, 2002
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
1. Field of the Invention
The invention relates to user-machine interfaces, and more particularly, to software methods and techniques for implementing an agent-oriented architecture which is useful for user-machine interfaces.
2. References
The following documents are all incorporated by reference herein.
T. Kuhme, Adaptive Action Prompting: A complementary aid to support task-oriented interaction in explorative user interfaces. Report #GIT-GVU-93-19, Georgia Institute of Technology, Dept. of Computer Science, Graphics, Visualization, and Usability Center, 1993.
L. Balint, Adaptive Dynamic Menu System. Poster Abstracts HCI International '89, Boston, September 18-22, 1989.
A. Cypher, Eager: Programming Repetitive Tasks By Example. Proc. CHI '91, pp. 33-39, 1991.
R. Beale, A. Wood, Agent-based interaction, Proceedings of HCI '94, Glasgow, 1995, pp. 239-245.
A. Wood, "Desktop Agents", School of Computer Science, University of Birmingham, B.Sc. Dissertation, 1991.
Clarke, Smyth, "A Cooperative Computer Based on the Principles of Human Cooperation", International Journal of Man-Machine Studies 38, pp. 3-22, 1993.
N. Eisenger, N. Elshiewy, MADMAN: Multi-Agent Diary Manager, ESRC-92-7i (Economic and Social Resource Council) Internal Report, 1992.
T. Oren, G. Salomon, K. Kreitman, A. Don, "Guides: Characterizing the Interface", in The Art of Human-Computer Interface Design, Brenda Laurel (ed.), 1990, pp. 367-381.
F. Menczer, R. K. Belew, Adaptive Information Agents in Distributed Textual Environments, Proceedings of the Second International Conference on Autonomous Agents (Agents '98), Minneapolis, Minn., May 1998.
P. Brazdil, M. Gams, S. Sian, L. Torgo, W. van de Velde, Learning in Distributed Systems and Multi-Agent Environments, http://www.ncc.up.pt/~ltorgo/Papers/LDSME/LDSME-Contents.html (visited 1998).
B. Hodjat, M. Amamiya, The Self-organizing symbiotic agent, http://www_al.is.kyushu-u.ac.jp/~bobby/1stpaper.htm, 1998.
P. R. Cohen, A. Cheyer, M. Wang, S. C. Baeg, OAA: An Open Agent Architecture, AAAI Spring Symposium, 1994, http://www.ai.sri.com/~cheyer/papers/aaai/adam-agent.html (visited 1998).
S. Franklin, A. Graesser, Is it an Agent or just a Program? A Taxonomy for Autonomous Agents, in: Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag, 1996, http://www.msci.memphis.edu/~Franklin/AgentProg.html (visited 1998).
B. Hayes-Roth, K. Pfleger, P. Lalanda, P. Morignot, M. Balabanovic, A domain-specific Software Architecture for adaptive intelligent systems, IEEE Transactions on Software Engineering, April 1995, pp. 288-301.
Y. Shoham, Agent-oriented Programming, Artificial Intelligence, Vol. 60, No. 1, pp. 51-92, 1993.
M. R. Genesereth, S. P. Ketchpel, Software Agents, Communications of the ACM, Vol. 37, No. 7, July 1994, pp. 48-53, 147.
A. Cheyer, L. Julia, Multimodal Maps: An Agent-based Approach, http://www.ai.sri.com/~cheyer/papers/mmap/mmap.html, 1996.
T. Khedro, M. Genesereth, The federation architecture for interoperable agent-based concurrent engineering systems. In International Journal on Concurrent Engineering, Research and Applications, Vol. 2, pp. 125-131, 1994.
P. Brazdil and S. Muggleton, "Learning to Relate Terms in Multiple Agent Environment", Proceedings of Machine Learning: EWSL-91, pp. 424-439, Springer-Verlag, 1991.
S. Cranefield, M. Purvis, An agent-based architecture for software tool coordination, in Proceedings of the Workshop on Theoretical and Practical Foundations of Intelligent Agents, Springer, 1996.
T. Finin, J. Weber, G. Wiederhold, M. Genesereth, R. Fritzson, D. McKay, J. McGuire, S. Shapiro, C. Beck, Specification of the KQML Agent-Communication Language, 1993 (hereinafter "KQML 1993"), http://www.cs.umbc.edu/kqml/kqmlspec/spec.html (visited 1998).
Yannis Labrou and Tim Finin, A Proposal for a new KQML Specification, TR CS-97-03, February 1997, Computer Science and Electrical Engineering Department, University of Maryland Baltimore County, http://www.cs.umbc.edu/~jklabrou/publications/tr9703.pdf.
R. R. Korfhage, Information Storage and Retrieval, John Wiley and Sons, June 1997.
M. Mitchell. An Introduction to Genetic Algorithms. MIT Press, 1996.
D. C. Smith, A. Cypher, J. Spohrer, KidSim: Programming Agents without a programming language, Communications of the ACM, Vol. 37, No. 7, pp. 55-67, 1994.
3. Description of Related Art
Most human-machine interfaces in use today are relatively complicated and difficult to use. Frequently this is a consequence of the growing number of features to which the interface is expected to provide easy access.
Users usually have the following problems with current interfaces:
Prior to selecting an action, users have to consider whether the machine provides an appropriate action at all. It would therefore be desirable if the interface could provide feedback to the user.
It is difficult to access the actions users already know about. It would therefore be desirable if the user could freely express his or her needs without being bound to a limited set of conventions preset by the interface.
Users have to imagine what would be an appropriate action to proceed with in order to perform a certain task within the machine's domain. It would therefore be desirable if the interface could guide users through the many options they may have at any stage of the interaction.
User interfaces that adapt their characteristics to those of the user are referred to as adaptive interfaces. These interactive software systems improve their ability to interact with a user based on partial experience with that user. The user's decisions offer a ready source of training data to support learning. Every time the interface suggests some choice, the human either accepts that recommendation or rejects it, whether this feedback is explicit or simply reflected in the user's behavior.
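The accept/reject feedback loop described above can be sketched as a minimal adaptive chooser. This is an illustrative sketch only; the class and method names, the weighting scheme, and the learning rate are all hypothetical choices, not part of the original disclosure.

```python
# Minimal sketch of an adaptive interface element that learns from
# accept/reject feedback. All names and constants are hypothetical.

class AdaptiveChooser:
    """Suggests the option with the highest learned weight."""

    def __init__(self, options, learning_rate=0.3):
        # Every option starts with the same weight.
        self.weights = {opt: 1.0 for opt in options}
        self.learning_rate = learning_rate

    def suggest(self):
        # Recommend the currently highest-weighted option.
        return max(self.weights, key=self.weights.get)

    def feedback(self, option, accepted):
        # Reinforce accepted suggestions; penalize rejected ones.
        # Rejection may be explicit or inferred from the user's behavior.
        delta = self.learning_rate if accepted else -self.learning_rate
        self.weights[option] = max(0.0, self.weights[option] + delta)
```

Each interaction thus doubles as a training example, which is the sense in which the user's decisions are "a ready source of training data."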
The following general features may be desirable in a user interface:
Natural Expression: The user should be able to express his or her intentions as freely and naturally as possible.
Optimum Interaction: Interaction should be limited to the situations in which the user is in doubt as to what she/he can do next or how she/he can do it, or the system is in doubt as to what the user intends to do next. Note here that lack of interaction or feedback from the system is not necessarily desirable. Interaction is considered optimum if it occurs where it is required, no more often and no less often.
Adaptability: Adaptability could be about the changing context of interaction or application, but more importantly, the system should be able to adapt to the user's way of expressing her/his intentions. Two main issues that are taken into account in this regard are generalization and contradiction recovery.
Generalization: An adaptable system in its simplest form will learn only the instance that it has been taught (implicitly or explicitly). Generalization occurs when the system uses what it has learned to resolve problems it deems similar. The success and degree of generalization, therefore, depend on the precision of the similarity function and the threshold the system uses to distinguish between similar and dissimilar situations.
Contradiction: A system that generalizes may well over-generalize. The moment the system, acting on a generalization, reacts in a manner the user does not anticipate, the system has run into a contradiction. The resolution of this contradiction is an integral part of the learning and adaptability process.
Ease of change and upgrade: The system designer should easily be able to upgrade or change the system with minimum compromise to the adaptation the system has made to users. Change should preferably be possible even at run-time (i.e., on the fly).
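The generalization and contradiction points above can be sketched concretely. In this sketch the similarity function (word-overlap Jaccard similarity), the threshold value, and all names are illustrative assumptions; the disclosure does not prescribe a particular similarity measure.

```python
# Sketch of generalization via a similarity function and a threshold,
# with contradiction recovery by learning the specific case explicitly.
# The Jaccard measure and the 0.5 threshold are assumed for illustration.

def jaccard(a, b):
    """Word-overlap similarity between two phrases, in [0, 1]."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class Learner:
    def __init__(self, threshold=0.5):
        self.known = {}           # taught phrase -> action
        self.threshold = threshold

    def teach(self, phrase, action):
        self.known[phrase] = action

    def resolve(self, phrase):
        # Generalize: reuse the action of the most similar taught phrase,
        # but only if it clears the similarity threshold.
        if not self.known:
            return None
        best = max(self.known, key=lambda k: jaccard(k, phrase))
        return self.known[best] if jaccard(best, phrase) >= self.threshold else None

    def correct(self, phrase, action):
        # Contradiction recovery: the user rejected a generalized answer,
        # so learn the specific instance explicitly.
        self.teach(phrase, action)
```

The success and degree of generalization hinge on exactly the two knobs visible here: the precision of `jaccard` and the value of `threshold`.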
Various attempts have been made to reduce the navigation effort in menu hierarchies:
Random access to menu items (e.g., key shortcuts).
Pointer setting strategies for pop-up menus provide for a faster access to a certain menu item, often the most recently or most frequently used.
Offering user assistance in selecting valid and appropriate items (e.g., grey-shading of disabled menu-items).
Action prompting according to previously selected objects (object-specific menus or dynamically exchanged control panels).
Reorganization of menus according to user-specific usage patterns.
Automating iterative patterns in interaction (for example, the Eager system, which is a Programming-By-Example system that anticipates which action the user is going to perform next).
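One of the techniques listed above, reorganizing menus according to user-specific usage patterns, can be sketched in a few lines. The class name and the tie-breaking rule (ties keep the designer's original order) are illustrative assumptions.

```python
# Sketch of a menu that reorders itself by usage frequency.
# All names are hypothetical.
from collections import Counter

class AdaptiveMenu:
    def __init__(self, items):
        self.items = list(items)   # designer-specified order
        self.usage = Counter()

    def select(self, item):
        # Record each selection as evidence of the user's usage pattern.
        self.usage[item] += 1

    def ordered(self):
        # Most frequently used items float to the top; sorted() is stable,
        # so ties preserve the original designer-specified order.
        return sorted(self.items, key=lambda i: -self.usage[i])
```

This illustrates the trade-off such schemes face: frequently used items become faster to reach, at the cost of a menu layout that is no longer fixed.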
Most human-computer interfaces today are programmed in standard sequential or object-oriented software. Another software paradigm exists, however, which has not heretofore been used effectively for human-machine interfaces. Under this paradigm, which is known generally as an agent-based software architecture, a given task is divided up into several sub-tasks and assigned to different "agents" in the system. "Agents" are communicating concurrent modules, each of which handles a part of the decision-making process. If the agents are capable of learning, they are referred to as adaptive agents.
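The paradigm just described, a task split into sub-tasks handled by communicating modules, can be sketched as follows. The two-stage pipeline, the message-passing style, and all names are illustrative assumptions, not the architecture disclosed later in this application.

```python
# Sketch of agents as communicating modules, each handling one
# sub-task of the overall job. All names are hypothetical.
from queue import Queue

class Agent:
    """A module with an inbox; each agent handles one part of the task."""
    def __init__(self):
        self.inbox = Queue()
        self.downstream = None    # next agent in the chain, if any

    def post(self, message):
        self.inbox.put(message)

    def step(self):
        # Handle one pending message, then pass the result downstream.
        message = self.inbox.get()
        result = self.process(message)
        if self.downstream is not None:
            self.downstream.post(result)
        return result

    def process(self, message):
        raise NotImplementedError

class Normalizer(Agent):
    """Sub-task 1: clean up raw user input."""
    def process(self, message):
        return message.strip().lower()

class Tokenizer(Agent):
    """Sub-task 2: break the cleaned input into tokens."""
    def process(self, message):
        return message.split()
```

In a full system each agent would run concurrently and could learn (an adaptive agent); here the `step()` calls are driven sequentially only to keep the sketch deterministic.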
Some examples of situations in which agent-based interaction has been used are as follows:
Agents can be used to allow the customized presentation of information. These agents preprocess the data and display it in a way that can be unique for each individual user.
Agents can act as tutors or guides, supplementing the user's knowledge with their own. These assist in the current task by providing alternative views and additional relevant information.
Agents can be used for the adaptive search and retrieval of information.
One predominant approach to the use of agents in user interaction has been to concentrate a large bulk of the interaction responsibilities in a single agent, thus reverting to a centralized architecture. Nevertheless, many real-world problems are best modeled using a set of cooperating intelligent systems. Our society, for example, consists of many interacting entities; if we are interested in modeling some aspects of our society, it would be desirable to structure our model in the same way. As another example, since data often originates at different physical locations, centralized solutions are often inapplicable or inconvenient. In addition, using a number of small, simple adaptive agents instead of one large, complicated one may simplify the process of solving a complicated problem. In other words, agents collectively exhibit emergent behavior, where the behavior of the agent population as a whole is greater than the sum of its parts.
The invention, roughly described, involves a computer-implemented method for processing a subject message by a network of agents, each of which has a view of its own domain of responsibility. An initiator agent which receives a user-input request, and does not itself have a relevant interpretation policy, queries its downchain agents as to whether each queried agent considers the message, or part of the message, to be within its domain of responsibility. Each queried agent recursively determines whether it has an interpretation policy of its own that applies to the request and, if not, further queries its own downchain neighboring agents. Those further agents eventually respond to the further queries, thereby allowing the first-queried agents to respond to the initiator agent. The recursive invocation of this procedure ultimately determines a path, or a set of paths, through the network from the initiator agent to one or more leaf agents. The request is then transmitted down each such path, with each agent along the way taking any local action thereon and passing the message on to the next agent in the path. In the event of a contradiction, the network is often able to resolve it according to predetermined automatic algorithms. If the network cannot resolve a contradiction automatically, it learns the new interpretation policies necessary to interpret the subject message properly. Such learning preferably includes interaction with the user, but only to the extent necessary, and preferably localizes the learning as close to the correct leaf agent in the network as possible, though preferably at an agent upchain of the leaf agent itself.
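The recursive query step of the method just summarized can be sketched as follows. This is a deliberately reduced sketch: representing an interpretation policy as a simple keyword predicate, and returning a single path rather than a set of paths, are simplifying assumptions; the actual interpretation policies and contradiction handling are richer.

```python
# Sketch of the recursive downchain query: an agent that has no
# applicable interpretation policy asks its downchain neighbors
# whether they claim the message. All names are illustrative.

class Agent:
    def __init__(self, name, claims=None):
        self.name = name
        self.claims = claims      # predicate standing in for an interpretation policy
        self.downchain = []       # neighboring agents further down the network

    def query(self, message):
        """Return a path of agent names claiming the message, or None."""
        if self.claims is not None and self.claims(message):
            return [self.name]            # this agent's own policy applies
        for agent in self.downchain:      # otherwise, recursively query downchain
            path = agent.query(message)
            if path is not None:
                return [self.name] + path
        return None                       # nobody in this subtree claims it
```

Once `query` returns a path, the request would be transmitted down it, each agent taking its local action and forwarding the message, which is the dispatch step described above.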