It has been proposed for many years that certain functions of the nervous systems of animals could be emulated, at least in part, using electronic circuitry. Many of the most influential papers in this area are collected in Neurocomputing, edited by Anderson and Rosenfeld and published by the MIT Press, Cambridge, Mass., 1988. In recent years, considerable progress has been achieved in this area of research. One of the most promising avenues of investigation has been artificial neurons arranged in regular networks, typically referred to as neural networks. Representative examples of useful networks of such neurons are shown in U.S. Pat. Nos. 4,660,166 and 4,719,591. In general, each input or afferent "synapse" of each artificial neuron in these networks implements a characteristic linear or nonlinear transfer function between a corresponding input stimulus and the summing node of the neuron. In their most general form, the transfer function parameters of these synapses are selectively variable. An example of one such "programmable" synapse is shown in U.S. Pat. No. 4,782,460.
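The synapse-and-summing-node arrangement described above can be illustrated with a minimal sketch. This is not the circuitry of the cited patents, merely a hypothetical software analogue: each synapse here applies a simple linear transfer function (a variable gain) to its stimulus, the results are combined at a summing node, and the sum is passed through a sigmoid squashing function.

```python
# Hypothetical sketch of an artificial neuron: each input "synapse" applies a
# transfer function to its stimulus; the summing node combines the results.
import math

def neuron_output(stimuli, gains, bias=0.0):
    """Each synapse applies a linear transfer (gain * stimulus); the total at
    the summing node is passed through a sigmoid activation function."""
    s = sum(g * x for g, x in zip(gains, stimuli)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid: output bounded in (0, 1)

y = neuron_output([1.0, 0.5, -0.2], [0.8, -0.4, 1.5])
```

A "programmable" synapse in the sense of the cited art corresponds here to making the `gains` selectively variable rather than fixed.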
It is generally accepted that synapses are the smallest macroscopic working elements in a nervous system. This implies that the function of a neural network is intrinsically dictated by synaptic behavior. One of the most widely accepted, and biologically accurate, descriptions of synaptic behavior was given by Donald Hebb in The Organization of Behavior (1949). According to the "Hebbian" learning rule, "when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
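Hebb's verbal rule is conventionally formalized as a weight update proportional to the product of presynaptic and postsynaptic activity. The following is a hypothetical sketch of that conventional formalization, not a rule taken from Hebb's text or the cited patents; the learning rate `eta` is an assumed parameter.

```python
# Hypothetical sketch of the conventional mathematical form of Hebb's rule:
# the coupling from cell A to cell B strengthens when A's activity
# repeatedly coincides with B's firing.
def hebbian_update(w, x_a, y_b, eta=0.1):
    """delta_w = eta * (presynaptic activity) * (postsynaptic activity)."""
    return w + eta * x_a * y_b

w = 0.5
w = hebbian_update(w, x_a=1.0, y_b=1.0)  # both cells active: coupling grows
```

Note that with nonnegative activities this update can only increase the weight, a property discussed further below.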
Although various types of transfer functions have been proposed for use in neural networks to model Hebbian synaptic behavior, such transfer functions typically relate a set of input/output pairs according to a plurality of "weighting factors". In this type of neural network, a set of equations, i.e., a model, can be derived in terms of weighting factors for a network of a fixed topology. The procedure for determining the values of these factors is similar to the procedure for determining the values of the parameters of a statistical model. However, several problems exist in neural networks which employ such weighting factors. The most significant is the lack of capability for incremental "learning", or adaptation, of the weighting factors. As in the procedure for determining the values of the parameters of a given statistical model, all data, new as well as old, must be considered, essentially in a batch process, to recalculate the weighting factors. Thus, during the "learning" phase, the network cannot perform its normal functions. Another major problem arises when the Hebbian learning rule is implemented in terms of such "weights" and applied to such networks. Typically, application of the rule causes the weighting factors to increase monotonically, so that a persistent input pattern will tend eventually to overwhelm the network. If this were true in the biological system, we would all become so hypersensitive to persistent stimuli from such common items as clothes, eyeglasses, and background noises that we would go insane! Other problems include the uncertainty and generally slow rate of convergence of the weighting factors with respect to a set of input/output pairs. Such problems render neural networks based on the concept of weighting factors not only unsupported in biology, but also limited in practical utility.
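The monotonic-growth problem described above can be demonstrated with a small hypothetical sketch. It assumes a linear neuron and the plain product-form Hebbian update (both illustrative choices, not taken from the cited art): under a persistently repeated input pattern, the neuron's response grows without bound at every step.

```python
# Hypothetical sketch: with the plain Hebbian update and a persistent input
# pattern, the weighting factors (and thus the response) grow monotonically.
def run(steps, eta=0.1):
    w = [0.1, 0.1]          # initial weighting factors
    x = [1.0, 1.0]          # the same input pattern, presented persistently
    history = []
    for _ in range(steps):
        y = sum(wi * xi for wi, xi in zip(w, x))          # linear output
        w = [wi + eta * xi * y for wi, xi in zip(w, x)]   # Hebbian update
        history.append(y)
    return history

h = run(20)  # each response exceeds the last: the pattern overwhelms the net
```

Here each step multiplies the response by a factor greater than one, which is the runaway behavior the passage contrasts with biological habituation to clothes, eyeglasses, and background noise.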