Interest in the information processing capabilities of the brain dates back to the late 1800s. Only recently, however, have artificial neuron-based networks become truly useful. For example, artificial neural networks are finding uses in vision and speech recognition systems, signal processing, and robotics.
Research and development of neural networks grew out of attempts to model the operations and functions of the brain; consequently, much of the terminology has biological origins.
The basic component of a neural network is a neuron. A neuron can be thought of as a weighted summer. The neuron has a number of inputs. Each input is multiplied by a weight value and the products are added. The weights may be positive (excitatory) or negative (inhibitory). The output of the neuron is a function of the sum of the products.
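The weighted-summer neuron described above can be sketched as follows (the function and parameter names are illustrative, not part of the invention):

```python
# A minimal sketch of a neuron as a weighted summer.
def neuron_output(inputs, weights, activation=lambda s: s):
    # Each input is multiplied by its weight and the products are added.
    # Positive weights are excitatory; negative weights are inhibitory.
    s = sum(x * w for x, w in zip(inputs, weights))
    # The neuron's output is a function of the sum of the products.
    return activation(s)
```

Here the identity function stands in for the output function; a threshold or graded function may be substituted, as discussed below.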
One neural network is known as the single layer network. Single layer networks are useful for solving linearly separable problems. One application of the single layer network is pattern recognition.
In the simplest case, each neuron is designed to identify a specific pattern in the input signals. Each neuron of the network has the same inputs. The output of each neuron is either a "hit" or a "miss". The sum of the products for each neuron is compared to a threshold value. If a neuron's sum is greater than the threshold value, then it has recognized its pattern and signals a "hit". The number of outputs of the network equals the number of patterns to be recognized and, therefore, the number of neurons. Only one neuron will signal a "hit" at a time (when its pattern appears on the inputs).
In a more complex case, the output of the neuron is not a "hit" or "miss". Rather, it is a graded scale value indicating how close the input pattern is to the neuron's pattern.
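Both cases can be sketched in a few lines. In this hypothetical example, each pattern is encoded as one neuron's weight row; the thresholded variant reports a "hit" or "miss" per neuron, while the graded variant returns each neuron's sum as a closeness measure:

```python
def weighted_sum(inputs, weights):
    # Multiply each input by its weight and sum the products.
    return sum(x * w for x, w in zip(inputs, weights))

def classify(inputs, weight_rows, threshold):
    # Single-layer network: one neuron per pattern, all sharing the inputs.
    # A neuron signals a "hit" (True) when its sum exceeds the threshold.
    return [weighted_sum(inputs, row) > threshold for row in weight_rows]

def graded_scores(inputs, weight_rows):
    # Graded variant: each sum indicates how close the input pattern
    # is to the corresponding neuron's pattern.
    return [weighted_sum(inputs, row) for row in weight_rows]
```

With three weight rows matching three distinct binary patterns, only the neuron whose pattern appears on the inputs signals a hit.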
For more difficult problems, a multilayered network is needed.
The weights of a neuron may be either fixed or adaptable. Adaptable weights make a neural network much more flexible. Adaptable weights are modified by learning laws or rules.
Most learning laws are based on associativity. Starting with a learning rule proposed by Donald O. Hebb in 1949 (Hebb's rule), learning theory has generally assumed that the essence of learning phenomena involved an association between two or more signals. In Hebb's rule, for instance, the weight associated with an input is increased if both the input line and the output line are concurrently active. There have been many variations on this theme but in one way or another, most of the neural learning rules derive from this basis.
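A common way to state Hebb's rule is as a weight increment proportional to the product of input and output activity; the learning-rate parameter below is an illustrative assumption, not specified by Hebb:

```python
def hebb_update(weights, inputs, output, eta=0.1):
    # Hebb's rule: the weight on input i grows when input x_i and the
    # output y are concurrently active (both nonzero and same sign).
    return [w + eta * x * output for w, x in zip(weights, inputs)]
```

When the output is inactive (zero), no weight changes, reflecting the requirement that input and output be concurrently active.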
More recently, other learning laws have looked at associative rules based on concurrent input signals. In this approach, the weight of one input is increased if that input and a designated neighboring input are both active within a restricted time window.
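One possible formulation of such an input-to-input rule, sketched here under the assumption that each input line carries a timestamp of its last activity, is:

```python
def neighbor_update(weights, i, t_input, t_neighbor, window, eta=0.1):
    # Increase the weight of input i if that input and its designated
    # neighboring input were both active within a restricted time window.
    # t_input, t_neighbor: times of last activity on the two input lines.
    weights = list(weights)  # copy so the caller's list is unchanged
    if abs(t_input - t_neighbor) <= window:
        weights[i] += eta
    return weights
```

The window test stands in for whatever coincidence-detection mechanism a particular rule specifies; the details vary across the rules alluded to above.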
It is now generally accepted that in adaptive pattern recognition and classification, for example, it is the correlation between multiple input signals, or between input signals and output signals, respectively, which constitutes the adaptive mapping mechanism. Any system which would "learn" to recognize and/or categorize a pattern must encode these correlations. Hence, most, if not all, work has focused on association-based mechanisms.
However, each of these learning rules has proved limiting. Each rule works well, given a specific type or group of problems, but none of them, so far, has proved sufficiently general or adaptive to apply to the broad spectrum of problems in the field.
The word adaptive is the key. Adaptation, or adaptive response (AR), is the process of altering the response of the network to meet the demands of new conditions.
In a biological system, such alterations are not made upon the first experience of change. Rather, the change must occur frequently and/or over long time frames before the organism undergoes substantial alteration. Biological systems tend to be very conservative in this regard.
The development of muscle tissue in an athlete provides a good example of this aspect of adaptive response. Consider a would-be weightlifter who is just starting the sport.
After practicing for a week, he may be able to lift considerably more weight than he could at the beginning. Note that the nature of practice is periodic with long intervals (in comparison to the time of the practice itself) between episodes. Over the short run, the body maintains a memory of the physical demand placed on the muscles between episodes.
This memory improves with continuation of the practice. If the weightlifter continues for a week, he will notice that lifting the same weight at the end of that week will be somewhat easier than at the beginning.
If he continues to train for a month, he will find he can lift even more weight with the same apparent effort as was required for the lesser weight in the early days.
Continued regular training will lead to physical changes in the musculature. Muscle bundles will bulk out, as new tissue is built to help meet the continuing demand. A form of long-term memory ensues, in that this new tissue will remain for some period, even if training is forgone for a while.
Why doesn't the body produce this new tissue at the first or even second episode of demand? The reason is that biological tissues have evolved to deal with the stochastic nature of the environment. They are inherently conservative, maintaining only as much response capability as is "normally" required to handle daily life. Only after repeated exposure to the new level of demand will the body respond by producing new muscle fiber.
What happens to the athlete who falls off in training? Up to a point it depends on how long he has been training. If he quits after only a week, then within another week or so he will have reverted to his capacity prior to training. If, however, he stops after six months of training, it may take many months for the new capacity to atrophy back to the pre-training level. In fact, should the athlete reinstitute a training program within several months of stopping, chances are he can regain his peak performance in a relatively short time compared to the initial process. This is due to the long-term memory effect of adaptive response.
In fact, the adaptive response of biological systems illustrates that learning is not strictly a correlation between two signals. Rather, a more general view is that learning is based on the encoding of information inherent in the time-varying signal along a single input channel.
The present invention is directed at providing a processor as a building block of a neural network which is based on this more fundamental learning law.