1. Field of the Invention
This invention pertains to the field of pattern recognizers or artificial neural networks capable of supervised or unsupervised learning. More particularly, it pertains to such networks having neurons with digital weighting.
2. Description of the Prior Art
The general theory and operation of artificial neural networks (ANN), together with their existing and potential uses, construction, and methods and arrangements for learning and teaching, have been extensively described in publicly available literature and need not be repeated herein. However, U.S. Pat. No. 4,951,239, which issued Aug. 21, 1990, and U.S. Pat. No. 5,150,450, which issued Dec. 24, 1991, describe aspects of artificial neural networks useful as background of the present invention. Accordingly, these patents, of which one of the present coinventors is also a coinventor, are incorporated herein by reference.
For the purposes of the present invention it is only necessary to realize that an ANN has a plurality of "neurons," each having a plurality of "synapses" individually receiving input signals to the network. Each synapse weights its received input signal by a factor, which may be fabricated into the network, may be loaded later as part of a predetermined set, or may be "learned". The synapses generate weighted signals which are summed by a common portion of the neuron, often termed the "neuron body," to generate a sum signal which, typically, is output by the neuron body after modification by a sigmoid or other "activation function" which is not involved in the present invention.
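The neuron model just described can be sketched as follows. This is an illustrative sketch only, not circuitry or terminology from the patent; the function name and numeric values are hypothetical.

```python
import math

def neuron_output(inputs, weights):
    # Each synapse multiplies its received input signal by its weight factor.
    weighted = [x * w for x, w in zip(inputs, weights)]
    # The neuron body sums the weighted synapse signals into a sum signal.
    s = sum(weighted)
    # A sigmoid activation function modifies the sum before output.
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical three-synapse neuron: three inputs, three weight factors.
print(neuron_output([1.0, 0.5, -0.25], [0.8, -0.4, 1.2]))
```

With a zero sum signal the sigmoid yields 0.5, its midpoint, which is why the activation function can be treated separately from the weighting that is the subject of the invention.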
For practical use, an ANN has at least on the order of a hundred neurons, each with on the order of a hundred or more synapses. It is, therefore, apparent that even with a very large scale integrated (VLSI) circuit it is important that the elements of each neuron body and, especially, of each synapse be of simple, compact, and regular configuration.
Prior art artificial neural network neurons have been implemented with a variety of weighting arrangements which, while generally effective, have various deficiencies such as lack of resolution, particularly over time and with repeated changes. This deficiency is avoidable by the use of synapses with digital weighting. However, since resolution to one part in 256--the integer two to the eighth power--or more is typically necessary for practical use of an ANN, it is apparent that providing such weighting at each synapse, as by field effect transistors (FET) having different widths corresponding to powers of two or some other number, requires elements of different sizes, with the more significant digits having elements on the order of several hundred times larger than the corresponding elements for the least significant digit. The use of such large elements, and of elements of such varying sizes, in the synapses would result in impractically large circuits.
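The sizing problem described above can be illustrated with a short arithmetic sketch. Assuming binary-weighted elements (an assumption for illustration; the patent notes powers of two "or some other number"), the element for bit k is scaled by two to the k, so for eight bits the most significant element is 128 times the least significant one, and the total element width per synapse is 255 units.

```python
BITS = 8        # eight bits give resolution of one part in 2**8 = 256
unit_width = 1  # width of the least-significant-bit element (arbitrary unit)

# Binary-weighted element widths, one per bit of the digital weight.
widths = [unit_width * 2**k for k in range(BITS)]

print(widths)                   # element widths for bits 0..7
print(sum(widths))              # total width per synapse, in units
print(widths[-1] // widths[0])  # size ratio of MSB element to LSB element
```

Multiplied across a hundred or more synapses per neuron and a hundred or more neurons, this growth in element size is what makes such directly sized weighting impractical in VLSI.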
With VLSI circuits, temperature and fabrication variations across a chip result in different characteristics for transistors at different portions of the chip. These variations, as well as noise from adjacent elements, can easily result in different synapses of even the same neuron having weight differences exceeding the necessary resolution. While such differences might be avoided by techniques such as the use of differential signals, insofar as is known to applicants, no circuits exist which use these techniques and yet provide the necessary resolution, compactness, dependability, and simplicity provided by the present invention for practical implementation of an ANN.
Since the synapse weights of an ANN may, for different applications, be unmodifiable after initial fabrication in a VLSI or other circuit; be generated after fabrication but not be readily changed after generation; or be readily changed, as by switching, when in use and as required in learning, it is essential that circuits providing such weights not only provide the necessary resolution, temperature and fabrication variation immunity, and configuration for VLSI or other implementation, but also be adapted to constructions providing the requisite fixed or modifiable weights. Insofar as is known to applicants, there exist no neuron circuits providing these features and the advantages provided by the present invention that are also usable in a practical artificial neural network.