Natural Language Processing (NLP) and Natural Language Understanding (NLU) involve using computer processing to extract meaningful information from natural language inputs such as human-generated speech and text. One recent application of such technology is processing speech and/or text queries in multi-modal conversational dialog applications, such as those on mobile devices like smartphones.
FIG. 1 shows some example screen shots of one such conversational dialog application for a mobile device, Dragon Go!, which processes speech query inputs and obtains simultaneous search results from a variety of top websites and content sources. Such conversational dialog applications require adding a natural language understanding component to an existing web search algorithm in order to extract semantic meaning from the input queries. This can involve using approximate string matching to discover semantic template structures. One or more semantic meanings can be assigned to each semantic template. Parsing rules and classifier training samples can then be generated and used to train NLU models that determine query interpretations (sometimes referred to as query intents).
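The idea of matching an input query against semantic templates to recover an intent can be sketched as follows. This is a minimal illustration using Python's standard-library fuzzy matcher; the templates and intent names are hypothetical assumptions, and a production system would use trained NLU models rather than a simple edit-distance heuristic.

```python
import difflib

# Hypothetical semantic templates, each assigned a query intent.
TEMPLATES = {
    "play music by <artist>": "play_music",
    "find restaurants near <location>": "find_restaurant",
    "what is the weather in <location>": "get_weather",
}

def interpret(query: str) -> str:
    """Return the intent of the closest-matching template, or 'unknown'.

    Uses approximate string matching (difflib's ratio-based matcher)
    to pick the semantic template most similar to the query.
    """
    matches = difflib.get_close_matches(
        query.lower(), TEMPLATES.keys(), n=1, cutoff=0.4)
    return TEMPLATES[matches[0]] if matches else "unknown"

print(interpret("what is the weather in Boston"))
```

Here the placeholder tokens such as `<artist>` stand in for slot values; a real template matcher would align and extract those slots rather than treat them as literal text.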
In a typical conversational dialog application, there are several interconnected components: the dialog manager (DM), which decides what the next action should be after each user input; the automatic speech recognition engine (ASR), which translates spoken utterances into sequences of text words; the natural language understanding engine (NLU), which maps the words into semantic interpretations, or concepts; and the client, typically the component that resides on a mobile device or embedded platform and handles visual displays and touch input.
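One turn of this architecture can be sketched as a simple pipeline: the client forwards audio through ASR, ASR output feeds the NLU, and the DM chooses the next action. The function names, stubbed outputs, and intent labels below are illustrative assumptions, not an actual product API.

```python
def asr(audio: bytes) -> str:
    """Automatic speech recognition: audio -> sequence of text words (stubbed)."""
    return "find coffee near me"

def nlu(words: str) -> dict:
    """Natural language understanding: words -> semantic interpretation (stubbed)."""
    return {"intent": "find_business", "category": "coffee", "near": "me"}

def dialog_manager(interpretation: dict) -> str:
    """Decide the next system action from the semantic interpretation."""
    if interpretation.get("intent") == "find_business":
        return "show_search_results"
    return "ask_clarification"

def client_turn(audio: bytes) -> str:
    """One dialog turn: the client chains ASR -> NLU -> DM and renders the result."""
    return dialog_manager(nlu(asr(audio)))

print(client_turn(b"<captured audio>"))
```

In a deployed system each stage would run as a separate engine, with the client handling the visual display and touch input around this loop.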