The present invention relates to a pattern recognition method and system, and in particular to the use of neural networks therein.
The most popular current approach to pattern recognition has been to analyze an object or pattern into features and to attempt to match those features against programmed feature-templates. The major difficulty with this approach for generalized object recognition is that no known set of universal features exists for all objects. Thus, for each pattern family, someone must determine the most efficient features and program them into a classifier.
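The template-matching approach described above can be sketched as follows. This is a minimal illustration, not any particular prior-art system; the feature names and template values are hypothetical, chosen only to show how hand-programmed feature-templates drive classification.

```python
import math

# Hypothetical programmed feature-templates for one small pattern family.
# Each template is a hand-chosen feature vector (here: corner count,
# curvature, symmetry) that a programmer must devise per pattern family.
TEMPLATES = {
    "circle":   [0.0, 1.0, 1.0],
    "square":   [4.0, 0.0, 1.0],
    "triangle": [3.0, 0.0, 0.5],
}

def classify(features):
    """Match a measured feature vector against the programmed templates,
    returning the template with the smallest Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(features, TEMPLATES[name]))

print(classify([3.2, 0.1, 0.6]))  # a noisy, triangle-like measurement -> "triangle"
```

The sketch makes the paragraph's point concrete: the classifier is only as general as its hand-programmed templates, so each new pattern family requires new feature engineering.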
The growing field of neural networks aims to build intelligent machines using computational methods inspired by the mechanisms of brain function. The field is attempting to solve problems that are either too difficult or impractical for present computer approaches. These problems can be described most generally as computations of pattern recognition and pattern production. Pattern recognition includes the recognition of objects that are seen, heard, or felt; for example, faces, speech and handwriting. Pattern production includes the coordination of many coupled joints and muscles, and the formation of maps of related goals; for example, grasping, walking and speaking.
Neural networks deal with information very differently from conventional computers. It is not just that they have a parallel architecture. The most basic differences come from how information is represented, processed and stored.
The computer's basic building blocks are binary numbers. Computers work best when inputs are translated into this binary language so that information can be manipulated by binary logic without any inherent interference from neighboring information. By contrast, information in neural networks is input and processed as patterns of signals distributed over discrete processing elements. Neighboring information interacts strongly, setting contexts for the patterns of signals. Unlike in Artificial Intelligence, there are no symbols to manipulate. Symbolic manipulation has difficulty with problems that involve vaguely described patterns, such as the variations in handwritten characters. Neural networks are designed to work best with pattern information in some context, such as the geometric distributions of strokes in character patterns.
Although the architectures of computers and neural networks are very different, recent fast computer processors and graphic displays have made it possible to simulate neural networks.
The architectures of neural networks have a linear or planar arrangement of simple identical computational elements, called neurons, where each neuron can modify the relationship between its inputs and its output according to some rule. The power of a given neural network comes from a combination of the geometry used for the connections, the operations used for the interaction between neurons, and the learning rules used to modify the connection strengths.
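The idea of a neuron whose connection strengths are modified by a learning rule can be sketched in a few lines. This is a generic illustration using the classical perceptron rule, not the specific network of the invention; the sample task (learning logical AND) is chosen only for brevity.

```python
def neuron_output(weights, bias, inputs):
    # A simple neuron: weighted sum of inputs passed through a hard threshold.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, rate=0.1):
    """Modify the connection strengths (weights) by a learning rule:
    strengthen or weaken each connection in proportion to the error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron_output(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# The neuron learns the logical AND function from its own "experience".
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([neuron_output(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

A full network combines many such elements, and its power comes from the connection geometry and interaction operations as well as from the learning rule shown here.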
Applicant has successfully applied neural network theory to adaptive sensory-motor control and has implemented it. The networks perceive space subjectively in a robot controller, through the relationship between how a robot sees and where it moves in space. The neural networks learn from their own experience: they make their own mistakes and organize their own coordination, much as babies do.
The flexibility of a general pattern classifier will be of interest not only for optical character recognition but also for speech recognition, object recognition and general signal processing, where the signal information is often interrelated, incomplete and even partially contradictory. Commercial applications of neural pattern classifiers will open up new markets in the automation of image analysis, product inspection, reconnaissance and interfacing between man and computer. Such classifiers will eliminate the labor-intensive programming of template features for different families of patterns. This should result not only in higher-performance object recognition but also in pattern classifiers that are more cost effective and thus more economically attractive and competitive.
There are a number of other research efforts and patents in neural pattern classification. They are based on concepts that include back-propagation (Rumelhart, Hinton and Williams, 1986), adaptive resonance theory (Carpenter and Grossberg, 1987), Hopfield nets (Hopfield and Tank, 1986; U.S. Pat. Nos. 4,660,166 and 4,719,591) and Nestor modules (Cooper and Elbaum, 1981; U.S. Pat. No. 4,326,259).