The present invention is a system for natural language understanding which includes: a grammar processing method and a semantic processing method which convert natural language into previously stored experience and knowledge; natural language understanding based upon the previously stored experience and knowledge; a storage structure for storing experience and knowledge in a form which is convertible to and from natural language; and a method to add experience and knowledge from natural language input, including problem solving.
The following references to prior art are made:
1. Bates, M. 1978. “The Theory and Practice of Augmented Transition Network Grammars”, in L. Bolc (ed), NATURAL LANGUAGE COMMUNICATION WITH COMPUTERS. New York: Springer.
2. Cook, W. 1979. CASE GRAMMAR: DEVELOPMENT OF THE MATRIX MODEL. Washington, D.C.: Georgetown University Press.
3. Dyer, M. 1983. IN-DEPTH UNDERSTANDING. Cambridge, Mass.: MIT Press.
4. Earley, J. 1970. “An Efficient Context-Free Parsing Algorithm”. COMMUNICATIONS OF THE ACM, Vol. 13, pp. 94-102.
5. Fillmore, C. 1968. “The Case for Case”, in Bach, E., and Harms, R. (eds), UNIVERSALS IN LINGUISTIC THEORY. New York: Holt, Rinehart, and Winston.
6. Guha, R. V., and Lenat, D. B. 1990. “Cyc: A Mid-Term Report”. AI MAGAZINE, Vol. 11, No. 3, pp. 32-59.
7. Hendrix, G. G., Sacerdoti, E. D., Sagalowicz, D., and Slocum, J. 1978. “Developing a Natural Language Interface to Complex Data”. ACM TRANSACTIONS ON DATABASE SYSTEMS, Vol. 3, pp. 105-147.
8. Hirst, G. 1987. SEMANTIC INTERPRETATION AND THE RESOLUTION OF AMBIGUITY. Cambridge, England: Cambridge University Press.
9. Hutchins, S. 1991. “System and Method for Natural Language Parsing by Initiating Processing Prior to Entry of Complete Sentences”, U.S. Pat. No. 4,994,966.
10. Lebowitz, M. 1988. “The Use of Memory in Text Processing”. COMMUNICATIONS OF THE ACM, Vol. 31, pp. 1483-1502.
11. Kolodner, J. 1984. RETRIEVAL AND ORGANIZATIONAL STRATEGIES IN CONCEPTUAL MEMORY. Hillsdale, N.J.: Lawrence Erlbaum.
12. Kolodner, J. 1988. “Retrieving Events from a Case Memory: A Parallel Implementation”. Proceedings of the DARPA Workshop on Case-Based Reasoning, pp. 233-249. San Mateo, Calif.: Morgan Kaufmann.
13. Loatman, R., Post, D., Yang, C., and Hermansen, J. 1990. “Natural Language Understanding System”, U.S. Pat. No. 4,914,590.
14. Madron, T. “Extracting Words from Natural Language Text”, AI EXPERT, Vol. 4, No. 4, pp. 30-35.
15. Quirk, R., Greenbaum, S., Leech, G., and Svartvik, J. 1985. A COMPREHENSIVE GRAMMAR OF THE ENGLISH LANGUAGE. New York: Longman.
16. Sager, N. 1981. NATURAL LANGUAGE INFORMATION PROCESSING: A COMPUTER GRAMMAR OF ENGLISH AND ITS APPLICATIONS. Reading, Mass.: Addison-Wesley.
17. Schank, R., and Abelson, R. 1977. SCRIPTS, PLANS, GOALS, AND UNDERSTANDING. Hillsdale, N.J.: Lawrence Erlbaum.
18. Schank, R., and Riesbeck, C. (eds), 1981. INSIDE COMPUTER UNDERSTANDING: FIVE PROGRAMS PLUS FIVE MINIATURES. Hillsdale, N.J.: Lawrence Erlbaum.
19. Schank, R. 1982. DYNAMIC MEMORY: A THEORY OF LEARNING IN COMPUTERS AND PEOPLE. Cambridge, England: Cambridge University Press.
20. Slade, S. 1991. “Case-Based Reasoning: A Research Paradigm”. AI MAGAZINE (American Association for Artificial Intelligence), Vol. 12, No. 1, pp. 42-55.
21. Wilks, Y., Huang, X., and Fass, D. 1985. “Syntax, Preference, and Right Attachment”. Proceedings of the Ninth IJCAI.
22. Winograd, T. 1983. LANGUAGE AS A COGNITIVE PROCESS. VOL. 1: SYNTAX. Reading, Mass.: Addison-Wesley.
23. Woods, W. 1970. “Transition Network Grammars for Natural Language Analysis”. COMMUNICATIONS OF THE ACM, Vol. 13, No. 10, pp. 591-606.
Previous work utilizing natural language processing has fallen into a few application areas: database interfaces, translation, and understanding. The database interface and translation work are similar in that the natural language input serves as a selector of an alternate representation. The natural language understanding work has classified natural language input sentences into predefined categories for limited domains of categorization, without any processing to determine whether a categorization is consistent with the other natural language input sentences of the conversation. Other natural language processing work has expanded this limited categorization to fill in certain types of unstated information in a conversation for a limited situation, and has included a limited capability for answering questions about a conversation so categorized. Still other natural language understanding work has stored and retrieved representations of specific natural language sentences, but lacks the capability of combining multiple natural language sentences into a representation of experience and knowledge.
The following describes the main references from the prior art. Various syntax processing methods are thoroughly described in Bates 1978, Sager 1981, Winograd 1983, and Hutchins 1991. Hutchins describes an efficient parser for detecting grammatical errors in natural language text. However, none of these parsers utilizes a single grammar specification both for parsing incoming natural language and for forming natural language output. Quirk et al 1985 provide a thorough description of English grammar, especially the function of certain words such as pronouns, prepositions, conjunctions, interjections and other function words, and of prefixes and suffixes. Quirk et al also provide a detailed description of ellipsis, of tense with its related aspects, and of clause formation and placement. However, this grammar description includes neither a method for representing natural language nor a method for selecting word senses of natural language words. Case frames are described in Fillmore 1968 and refined in Cook 1979. A method and apparatus for understanding natural language, in the sense of selecting case frames which represent natural language text, is disclosed in Loatman et al 1990. Case frames are a coarse categorization of natural language. Case frames lack the capability to represent the knowledge and experience implied by natural language in that they have no representation for the implications of a case frame, no representation for a process to realize the case frame, and no capability to determine whether a selected case frame is consistent with other case frames from the same natural language conversation. A limited representation of natural language is described in Schank and Abelson 1977 and Schank 1982. An instantiation of this representation is used in Dyer 1983 to understand stories in terms of this limited representation, to match limited types of general experience, and to answer limited questions about the understood story.
A type of memory organization for storing and retrieving specific experience gained from understanding natural language, using a form of Schank's limited natural language representation, is described in Kolodner 1984. Guha and Lenat 1990 describe a memory system which stores knowledge in two redundant data structures, each related to first order predicate calculus. This memory system relies heavily upon axioms, which complicates the accessing of experience and knowledge related to a natural language conversation. A further complication is that this memory system requires natural language processing to translate natural language input before the data structure related to first order predicate calculus can be accessed. This memory system is not specifically designed for selecting word senses of natural language words.
This invention builds on the previous natural language understanding work and significantly expands its capabilities. One expansion is to upgrade parsers: to efficiently handle ellipsis grammar and coordination grammar for understanding natural language, and to efficiently handle both parsing of incoming natural language and generation of outgoing natural language with the same syntax grammar data structures. Another expansion is to represent function words as functions. Function words include certain adjectives, certain adverbs, pronouns, prepositions, and conjunctions. Function words have a wide range of processes which represent them. These processes define the function words; they are described in more detail in this section, and in the greatest detail in the preferred embodiment of this invention. Another expansion is to process morphological words, i.e., words with prefixes or suffixes (affixes). A morphological word is processed into the phrase, clause, or word senses and functions which represent it. Another expansion is to perform ellipsis processing to replace ellipted words, i.e., left-out words, and then to determine whether the replaced words are consistent with the context of the conversation and with stored experience and knowledge. Morphological words and ellipsis can be selectively utilized in text generated for outgoing communication from the invention.
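The two expansions above — function words represented as executable processes, and morphological words decomposed into stem plus affix — can be illustrated with a minimal sketch. All names here are hypothetical, chosen for illustration; the patent does not specify this implementation.

```python
# Sketch only: a function word ("in") represented as a function that
# produces a relation between states, and a toy morphological step.
# The relation vocabulary and suffix list are assumptions, not the
# patent's actual data.

def preposition_in(container, contained):
    """'in' as a function: asserts a containment relation between two states."""
    return {"relation": "containment",
            "container": container,
            "contained": contained}

def strip_affixes(word, suffixes=("ness", "ly", "er")):
    """Split a morphological word into stem + suffix, if a known suffix fits."""
    for s in suffixes:
        if word.endswith(s) and len(word) > len(s) + 2:
            return word[: -len(s)], s
    return word, None

print(preposition_in("box", "ball"))
print(strip_affixes("quickly"))  # → ('quick', 'ly')
```

The point of the sketch is only the shape of the idea: the function word contributes a process (a relation-builder) rather than a dictionary meaning, and the morphological word is reduced to parts that can each be assigned word senses or functions.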
Another expansion is to represent all non-function words that carry a meaning in terms of states and their values. An additional expansion is to assign the meaning of such a word a word sense number. A word sense number is analogous to an address of a dictionary definition. However, the definition associated with a word sense number is in a form which allows: selecting a consistent and plausible definition, and hence its associated word sense number, from natural language; storing all that is known for the definition, and all that is known to be related to the definition, by realizing the definition with a state representation in terms of states, their values, and/or their relations; and structuring the definition and its associated word sense number for accessing the full range of generality of what is known for the definition and of what is known to be related to it.
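A word sense number acting as an address into a store of state-representation definitions can be sketched as follows. The dictionary contents, state names, and the overlap-scoring rule are illustrative assumptions only; the patent describes the structure, not this code.

```python
# Hypothetical store: each (word, sense number) pair addresses a
# definition realized as states and their values.
word_senses = {
    ("bank", 1): {"states": {"category": "institution", "function": "finance"}},
    ("bank", 2): {"states": {"category": "landform", "location": "river-edge"}},
}

def select_sense(word, context_states):
    """Pick the sense whose state values overlap the context states most.
    A stand-in for 'selecting a consistent and plausible definition'."""
    best, best_score = None, -1
    for (w, num), defn in word_senses.items():
        if w != word:
            continue
        score = len(set(defn["states"].values()) & set(context_states))
        if score > best_score:
            best, best_score = (w, num), score
    return best

print(select_sense("bank", {"river-edge", "landform"}))  # → ('bank', 2)
```

The returned pair is the word sense number: an address under which everything known for that definition, and everything related to it, can be stored and later retrieved at varying levels of generality.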
Another expansion is to combine the state representation of a natural language input, purposes, and the context of the conversation or situation into a three dimensional address which selects stored experience or knowledge in a memory of knowledge and experience. A purpose includes all related experience or knowledge such as: information content (information about an experience, such as advantages), an activity (a set of actions), a plan, an intention, a causal path (a set of experiences related by cause), a result path (a set of actions related by accomplishing a result), or a goal. In general, a purpose has a purpose relation, which is any concept which labels one clause or more than one related clause. The knowledge and experience in this memory are composed of data which represent natural language words, phrases, clauses, and groups of clauses. Each dimension can be assigned any value between a general value (unassigned) and a specific value (completely assigned). This range of dimension values selects experience or knowledge ranging from all that is stored, for unspecified dimensions, through all that meets a partial specification, to a specific experience or knowledge for a completely assigned specification. This range of specificity allows natural language input to be understood in terms of previously stored experience and knowledge, which in turn allows the natural language input to be assigned a measure of plausibility and expectedness based upon that stored experience and knowledge. Hence, an interpretation of a natural language input can be judged and reinterpreted when plausibility or expectedness criteria are not met. A particular application of the invention may, for example, make every plausible interpretation of a natural language input and then select the most plausible.
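The three dimensional address and its range from general (unassigned) to specific (completely assigned) can be sketched as partial-match retrieval. The memory contents and dimension names below are illustrative assumptions; only the addressing idea comes from the text above.

```python
# Toy memory: each item is addressed by the three dimensions named in
# the text (state representation, purpose, context). Contents invented
# for illustration.
memory = [
    {"states": "ordering-food", "purpose": "eat", "context": "restaurant",
     "experience": "customer orders from waiter"},
    {"states": "ordering-food", "purpose": "eat", "context": "home",
     "experience": "person phones for delivery"},
]

def retrieve(states=None, purpose=None, context=None):
    """Return every stored experience consistent with the assigned
    dimensions; an unassigned dimension (None) matches everything."""
    def matches(item):
        return all(want is None or item[dim] == want
                   for dim, want in (("states", states),
                                     ("purpose", purpose),
                                     ("context", context)))
    return [m["experience"] for m in memory if matches(m)]

print(retrieve(states="ordering-food"))                    # both items
print(retrieve(states="ordering-food", context="home"))    # one item
```

Leaving a dimension unassigned widens the selection to all that is stored for it; assigning all three narrows the address to a specific experience, which is what lets a new input be compared against memory for plausibility.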
The accessing of experience and knowledge also provides the capability to determine when new experience or knowledge is presented to the invention; this capability is the first step in acquiring and understanding new experience and knowledge. Another aspect of the accessing of experience and knowledge is that when the invention encounters ambiguity or contradiction, it can generate a clarifying question for output. An application can also utilize the stored experience and knowledge related to it to select a communication for output in response to incoming natural language statements, in order to achieve the goals of the application. In general, an application can generate a communication for output.
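The ambiguity-handling behavior described above — rank every plausible interpretation, answer with the best one, or emit a clarifying question when no single interpretation clearly dominates — can be sketched as below. The plausibility scores and the decision margin are invented for illustration; the patent does not prescribe numeric thresholds.

```python
# Sketch: choose among scored interpretations, or ask for clarification
# when the top two are too close to call. Margin value is an assumption.

def respond(interpretations, margin=0.2):
    """interpretations: list of (reading, plausibility) pairs."""
    ranked = sorted(interpretations, key=lambda p: p[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return f"Did you mean '{ranked[0][0]}' or '{ranked[1][0]}'?"
    return ranked[0][0]

print(respond([("pay at the bank", 0.9), ("sit on the bank", 0.3)]))
print(respond([("pay at the bank", 0.55), ("sit on the bank", 0.50)]))
```

The first call returns the dominant reading outright; the second, with nearly tied plausibilities, produces a clarifying question for output instead.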