1. Technical Field
The invention relates in general to the field of computer processing systems and, in particular, to high speed neural network systems and fabrication techniques thereof.
2. Background Art
Artificial neural networks are massively parallel arrangements of neuron-type elements that are interconnected in a specific manner to provide functions such as, but not limited to, optical character recognition, pattern recognition, machine learning, process control and voice recognition. The most common structures in artificial neural network systems are networks of non-linear processing elements, or "nodes," which are interconnected to a plurality of inputs through information processing channels, or "weights." Each node can process multiple inputs and weights, and each node has one output signal. The networks often have multiple layers, wherein layers subsequent to the first layer receive inputs from the outputs of the previous layer. The last layer in the network generally provides the output stimulus.
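The structure described above can be illustrated with a minimal sketch (not part of the disclosure; the sigmoid non-linearity and all weight values are assumptions chosen for illustration): each node forms a weighted sum of its inputs and applies a non-linear function, and a subsequent layer receives the previous layer's outputs.

```python
import math

def node_output(inputs, weights):
    """One node: weighted sum of its inputs passed through a non-linearity."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid non-linearity (assumed)

def layer_forward(inputs, weight_matrix):
    """One layer: each row of weights drives one node; one output per node."""
    return [node_output(inputs, row) for row in weight_matrix]

# Two layers: the second layer's inputs are the first layer's outputs.
x = [0.5, -1.0, 0.25]                            # example input stimulus
layer1 = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.2]]    # 2 nodes, 3 weights each
layer2 = [[0.5, -0.5]]                           # 1 node, 2 weights
y = layer_forward(layer_forward(x, layer1), layer2)
```

Each node here produces exactly one output signal, and the single node of the last layer supplies the network's output stimulus.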
Neural networks can simulate, on a very basic level, the features of biological nervous systems. The advantages of biological nervous systems include the ability to generalize, to adapt and to deal with a wide degree of latitude in their environments, to operate in a massively parallel fashion so as to function effectively at real-time rates, to tolerate faults (that is, to deal with errors internal to the network) and to learn by example. Neural networks do require training before useful results can be obtained. However, in many applications, one-time batch back-propagation training of a neural network is sufficient. Once trained, the resultant "weights" are stored and retrieved for later use in a non-training, testing mode, or "forward mode," operation.
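The "train once, then store and retrieve the weights" workflow mentioned above can be sketched as follows (a hedged illustration only: the JSON file format, file name and weight values are assumptions, not part of the disclosure):

```python
import json
import os
import tempfile

# Stand-in for the result of one-time batch back-propagation training.
trained_weights = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.2]]

path = os.path.join(tempfile.gettempdir(), "weights.json")

# Store the resultant weights after training completes.
with open(path, "w") as f:
    json.dump(trained_weights, f)

# Later, retrieve the stored weights for forward-mode (non-training) use.
with open(path) as f:
    restored = json.load(f)
```

No further training occurs after this point; forward-mode operation simply applies the restored weights to new inputs.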
In most highly interconnected networks, the number of weights increases rapidly and non-linearly as the number of nodes and inputs increases linearly. For example, if the number of nodes increases linearly with the number of inputs within a single layer of a fully interconnected network, then the number of weights increases as the square of the number of inputs. More specifically, a small network layer of, say, 10 inputs and 10 nodes that are fully interconnected would employ 100 weights. However, for 1000 inputs and 1000 nodes, the number of necessary weights would be prohibitively high at 1,000,000. This would not only require massive amounts of hardware to simulate, but would also be very complex and slow in the forward mode when emulated on a single processor system.
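The quadratic growth described above follows directly from full interconnection: every node carries one weight per input, so a layer with n inputs and n nodes requires n × n weights. A one-line check reproduces the figures in the text:

```python
def weights_fully_connected(num_inputs, num_nodes):
    """Weight count for a fully interconnected layer: one weight per
    (input, node) pair."""
    return num_inputs * num_nodes

small = weights_fully_connected(10, 10)        # 10 inputs, 10 nodes
large = weights_fully_connected(1000, 1000)    # 1000 inputs, 1000 nodes
```

The jump from 100 weights to 1,000,000 weights for a 100-fold increase in inputs is the square-law growth that makes a direct parallel hardware realization impractical at scale.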
Paramount to real-time network operation in the forward mode (for example, in a pattern recognition system using a neural network) is the use of very fast, simplified hardware. This is often difficult to achieve, since many pattern recognition applications require a large number of inputs and/or nodes. Many previously developed pattern recognition systems do not use a simplified and fast hardware system. Further, in the case of an extremely large number of inputs, a true parallel hardware manifestation of a network is virtually impossible. Many real-time pattern recognition applications require neural network speeds in excess of many billions of interconnects per second; however, a neural network that operates at these effective speeds is not currently available. In addition, many previously developed pattern recognition neural network systems have structures which do not lend themselves well to high-speed hardware implementation of pattern recognition.
Thus, a need exists in the art for a neural network system which lends itself to implementation with a large number of inputs and nodes and which does not require an enormous number of weights and physical interconnections. In addition, the topology of the network must be such that not only can it be manifested in a realizable and finite amount of hardware, but it must also be capable of operation at very high speeds. The neural network system disclosed herein satisfies these requirements.