This invention relates to an analog-digital hybrid neural network realization of associative memories and collective computation systems based on the Hopfield model disclosed in U.S. Pat. No. 4,660,166.
The Hopfield model shown in FIG. 1 in its simplest form defines the behavior of a state vector V as a result of the synaptic interactions between its components I.sub.1 -I.sub.n, using a matrix of row and column conductors in which the columns (or rows) are driven by neuron amplifiers A.sub.1 -A.sub.n with feedback from the rows (or columns) through synapses at the intersections between the rows and columns. Connections are selectively made through resistors R by switches SW to store synaptic information in the matrix. The value of each resistor R is preferably selected for the problem at hand, but may be made the same for every synaptic node thus created. Each amplifier drives the synaptic nodes (resistors) connected to its output by a column conductor, and receives feedback from every synaptic node connected to its row conductor. Each amplifier thus acts as a neuron capable of feeding back to all other neurons through the synaptic nodes connected to its output by resistors R. The final output of the neuron amplifiers is a set of voltage signals representing the stored word that best matches an input word consisting of a set of signals I.sub.PROMPT 1 through I.sub.PROMPT N.
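The store-and-recall behavior described above can be sketched in software. The following is a minimal discrete Hopfield model: the weight matrix T plays the role of the resistor/switch synapse matrix, the sign threshold stands in for the neuron amplifiers, and the prompt vector stands in for the I.sub.PROMPT inputs. All names and the Hebbian storage rule are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def store(patterns):
    """Build the synaptic weight matrix T from bipolar (+1/-1) patterns
    using the Hebbian outer-product rule; zero the diagonal so no
    neuron feeds back directly to itself."""
    n = patterns.shape[1]
    T = np.zeros((n, n))
    for p in patterns:
        T += np.outer(p, p)
    np.fill_diagonal(T, 0)
    return T

def recall(T, prompt, steps=10):
    """Iterate the neuron update V_i = sgn(sum_j T_ij V_j) until the
    state vector stops changing; the prompt is the initial state."""
    v = prompt.copy()
    for _ in range(steps):
        v_new = np.where(T @ v >= 0, 1, -1)
        if np.array_equal(v_new, v):
            break
        v = v_new
    return v

# Store one 8-bit word and recall it from a corrupted prompt.
stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
T = store(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]      # flip one bit of the prompt
result = recall(T, noisy)  # settles back to the stored word
```

In the hardware of FIG. 1, the matrix-vector product `T @ v` is performed in parallel by the currents summed on each row conductor, which is the source of the speed advantage over such a sequential simulation.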
A neural network having a 32.times.32 matrix of synapses has been implemented for research purposes with electronic switches and resistors of equal value at all synaptic nodes, and with analog amplifiers having an inhibit input for control of their thresholds. It would be desirable to expand the network to a much larger matrix, such as 1000.times.1000, but such a large network would require too many switches and resistors (on the order of one million of each).
In the continuing research effort into associative memories and collective computation systems based on the Hopfield model, the need has arisen for a research network utilizing hardware that can offer higher operating speeds than are presently obtainable through computer simulation, and that can be easily scaled upward, e.g., from 32 to 1000 or more neurons. Software simulations, while useful for proving a concept and verifying the basic operating principles of the Hopfield model, suffer from intrinsic speed limitations as compared to the expected speed (cycle time) of neural networks embodied in hardware. Thus, in order to research the properties and requirements of neural network associative memories and computers, programmable, flexible, and easily expandable neural networks using state-of-the-art techniques and components are required.