A biological neuron may be modeled as a processing element responsive to stimuli through weighted inputs known as synapses. The weighted stimuli are typically summed and processed through a particular non-linearity, such as a sigmoid function, associated with the neuron. That is, the output signal of the neuron may be represented as the sum of the products of the input signal vector and the synaptic weights, processed through the sigmoid function. The output of the neuron is typically coupled to the synapses of other neurons, forming an interconnection known as a neural network, which possesses many desirable properties including the ability to learn and recognize information patterns in a parallel manner. The neural network may be taught a particular pattern and later be called upon to identify the pattern from a distorted facsimile of the same pattern.
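The weighted-sum-and-sigmoid model described above can be sketched as follows (a minimal illustration; the function name and the input and weight values are hypothetical, not taken from any particular network):

```python
import math

def neuron_output(inputs, weights):
    # Sum the weighted stimuli: the products of the input signals
    # and their corresponding synaptic weights.
    activation = sum(x * w for x, w in zip(inputs, weights))
    # Process the summed stimuli through the sigmoid non-linearity.
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative stimuli and synaptic weights (arbitrary values).
y = neuron_output([1.0, 0.5, -0.25], [0.4, -0.6, 0.8])  # ≈ 0.475
```

The sigmoid bounds the output between 0 and 1, so the neuron's response saturates for large positive or negative weighted sums.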
Technologists have long studied the advantageous nature of the biological neuron in an attempt to emulate its behavior electronically. Many neural networks are implemented with analog circuitry wherein a plurality of analog input signals are simultaneously applied to each neuron and multiplied by an equal number of synaptic weights, the result of which is summed and processed through the non-linear function. Hence, for every synapse there is a corresponding input terminal coupled for receiving the analog input signal and a physical multiplier for providing the product of the analog input signal and the synaptic weight. The multipliers are thus physically mapped one-to-one to the synapses, the latter of which may be provided by analog memory locations such as the floating gate of an MOS transistor. For example, a 64-neuron implementation of an analog neural network in the prior art may use a 64-by-80 array of matching synapses and multipliers. Since multipliers and synapses typically require large areas, the physical size of the neural network grows quickly as the number of neurons increases, even with very large scale integration design techniques. As few as 256 neurons could preclude the use of a single integrated circuit package because of the excessive area required for the synapses and multipliers. Practical neural networks often use thousands of neurons to perform a single useful function and hundreds of thousands for more complex activities. Thus, the conventional analog architecture for neural networks may have imposed an undesirable practical limit on the future growth of the art. A more efficient neural architecture is needed which is not hampered by the redundant physical mapping common to the analog architecture.
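The area penalty of the one-multiplier-per-synapse mapping follows from simple counting. For the 64-by-80 example above, the arithmetic is:

```python
# Resource count for the 64-neuron analog example cited above.
neurons = 64
synapses_per_neuron = 80
# One analog memory cell (synapse) and one physical multiplier per weight:
multipliers = neurons * synapses_per_neuron
synapses = multipliers  # the mapping is one-to-one
```

Over five thousand multiplier/synapse pairs must be fabricated, each occupying significant die area, which is why the architecture scales poorly with neuron count.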
Further consider the large external pin count needed for neural networks processing analog input signals in parallel. The previous example of a 64-neuron integrated circuit package may use 200 or more pins when the terminals for power supplies and assorted control signals are included. The large number of pins is primarily driven by the physical mapping of a multiplier to each synapse, requiring a dedicated input terminal for each synapse and a conductor coupled therebetween. As the technology advances and the number of neurons per integrated circuit grows, the external pin count will in all likelihood increase accordingly. Should the number of neurons increase to, say, 256, the pin count for the integrated circuit package may exceed 300 pins, which is unacceptable in most, if not all, conventional integrated circuit packaging technologies. Attempts at time multiplexing the analog input signals have proven very difficult in practice. Hence, analog versions of neural networks generally suffer limitations on the number of neurons contained therein, imposed by the constraints on physical area of the integrated circuit and the external pin count needed to support the potentially vast array of parallel analog input signals in a useful neural network.
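A rough pin budget illustrates how the 200-pin figure above can arise. The output and overhead counts here are assumptions for illustration only, not figures from the text:

```python
# Hedged pin-count sketch for the hypothetical 64-neuron package.
input_pins = 80        # one dedicated terminal per parallel synapse input
output_pins = 64       # one output per neuron (assumed)
overhead_pins = 60     # power-supply and control terminals (assumed)
total_pins = input_pins + output_pins + overhead_pins
```

Under these assumptions the package already exceeds 200 pins, consistent with the figure cited above, before any growth in neuron count.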
Often it is desirable to re-program the synaptic weights and neural interconnection of the artificial neural network to solve a new problem and thereby make more efficient use of the resources. Unfortunately, the aforedescribed analog neural network also tends to be somewhat inflexible in terms of dynamically re-defining the synaptic weights and interconnection of the neural structure, in that the weighting value stored as a charge on the floating gate of an MOS transistor can take several milliseconds to change. The floating gates of the MOS transistors are mapped one to each synapse and typically programmed serially; thus, it may take several seconds to adjust all of the weights within the neural network. In electronic terms, several seconds is an extremely long time, too long for many voice and pattern recognition applications. Moreover, the physical mapping and hard-wired interconnects of the analog neural network are often predetermined and inflexible, making learning and behavioral modifications difficult. In addition, analog components are generally temperature dependent, making such devices difficult to design with high resolution for the synaptic weights and multiplication operations. While analog neural networks are typically very fast, such architectures are constrained in size, flexibility and accuracy, creating a need in the art to pursue other architectures, such as a digital approach.
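The several-second reprogramming time follows from the serial write scheme. As a hedged estimate, assuming a write time on the order of a couple of milliseconds per floating gate (the exact figure is an assumption):

```python
# Serial reprogramming time for the floating-gate synaptic weights.
synapse_count = 64 * 80           # synapse array from the earlier example
seconds_per_weight = 0.002        # assumed "several milliseconds" write time
total_seconds = synapse_count * seconds_per_weight  # ≈ 10 seconds
```

Even this modest array takes on the order of ten seconds to re-weight serially, far too slow for real-time voice or pattern recognition.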
Hence, there is a need for an improved neural network using a digital architecture having a predetermined number of data input terminals irrespective of the number of synapses per neuron. Such a digital architecture should reduce the number of multipliers per neuron, providing more neurons per unit area, while allowing the synaptic weights and neural interconnections to be dynamically re-assigned to solve other problems, thereby making more efficient use of the available resources.