The invention relates to neural network architecture, and more precisely, to the structure and operation of a node of such a network, the node being termed a neural processor element or, alternatively, a neuron.
Neural networks are models of the gross structure of the human brain, and represent a collection of nerve cells, or neurons, each connected to a plurality of other neurons from which it receives stimuli in the form of inputs and feedback, and to which it sends stimuli. Some interneuron connections are strong, others are weak. The human brain accepts inputs and generates responses to them, partly in accordance with its genetically programmed structure, but primarily by way of learning and organizing itself in reaction to inputs. Such self-organization marks a significant difference between the brain and an electronic processor, which operates only as permitted by algorithmic prescriptions.
Neural networks comprise multi-dimensional matrices made up of simple computational units (neurons) operating in parallel, which replicate some of the computational features of the human brain. A neuron (n.sub.x) of a network of m neurons is a unit that signals its state S.sub.x by the condition of a signal provided at its output, also termed its "axon". Such a signal may have a digital form, with signal presence indicated by the "on" state of a digital pulse and absence by the "off" state of a pulse. Such signaling is conventional for many applications other than neural processing.
The state of the neuron n.sub.x can be altered by direct stimulation of the neuron from outside the network, as well as by contributions from other neurons in the network. The contribution from another neuron n.sub.y is weighted by an interneural synaptic weight E.sub.xy. The state of the neuron n.sub.x is given by:

S.sub.x =F(n.sub.x)=F(.SIGMA..sub.y E.sub.xy S.sub.y) (1)

where E.sub.xy is the synaptic weight for the signal S.sub.y on the axon of the yth neuron. The function F(n.sub.x) describes the range of S.sub.x and the smoothness with which the neuron moves between the "off" and "on" states. The synaptic weight E.sub.xy tends to turn the neuron either on or off. A neural network learns by altering the synaptic weights E.sub.xy, and recalls or computes, ideally, by recursive processing which is asynchronous.
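The state computation described above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the source does not fix a particular transfer function F, so a sigmoid is assumed here as one common smooth choice for moving a neuron between its "off" and "on" states; all names are illustrative.

```python
import math

def neuron_state(weights, signals):
    """Sketch of equation (1): S_x = F(sum over y of E_xy * S_y).

    weights -- synaptic weights E_xy, one per contributing neuron y
    signals -- axon signals S_y from those contributing neurons
    F is assumed here to be a sigmoid (not specified in the source),
    giving a smooth transition between the off (0) and on (1) states.
    """
    # Weighted sum of contributions from the other neurons
    n_x = sum(e * s for e, s in zip(weights, signals))
    # Smooth transfer function F mapping n_x into the range of S_x
    return 1.0 / (1.0 + math.exp(-n_x))
```

With zero net stimulation the sigmoid places the neuron exactly midway between off and on; strong positive or negative weighted input drives it toward 1 or 0, which is the on/off-turning tendency of E.sub.xy described above.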
Those skilled in the art will realize that as the number of neurons proliferates, and as their interconnections multiply, a huge computational burden develops in the need to update or alter the synaptic weights in order to permit the network to learn. This burden has led to the simplification of neural networks into realizable digital models which operate synchronously and according to an algorithm. Since the ideal being simulated is an asynchronous processor, utilization of synchronous digital models impairs the attainment of ideal operation. For example, a synchronous digital model of equation (1) would limit the S.sub.y variable by using typical voltage thresholds, and would perform the recursive updating of synaptic weights by a synchronously-clocked, algorithmic digital architecture. Such a structure severely limits the number of realizable transfer functions.
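The restricted synchronous model criticized above can be sketched as follows. This is an assumed illustration of the kind of model the passage describes, not a structure disclosed by the source: the matrix layout, the fixed threshold parameter, and the function name are all hypothetical.

```python
def synchronous_step(E, S, threshold=0.0):
    """One synchronously-clocked update of all m neuron states.

    E -- m x m matrix of synaptic weights E_xy
    S -- list of m current binary states S_y (0 = off, 1 = on)

    Every neuron's weighted input is compared against the same fixed,
    voltage-like threshold, so the transfer function F collapses to a
    hard step -- one example of the limited set of realizable transfer
    functions. All neurons update from a single snapshot of S, i.e. on
    a common clock, unlike the asynchronous ideal.
    """
    m = len(S)
    return [1 if sum(E[x][y] * S[y] for y in range(m)) > threshold else 0
            for x in range(m)]
```

Because the update reads one frozen snapshot of S and applies a single hard threshold everywhere, the rich, smooth transfer behavior available to an asynchronous network is lost, which is the limitation the passage identifies.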