1. Field of the Invention
The present invention generally relates to a neural network associative memory LSI (Large Scale Integrated circuit). More particularly, the present invention relates to a synapse element with a learning function which can be highly integrated, and to a semiconductor integrated circuit device including the synapse element.
2. Description of the Background Art
Recently, research and development have been vigorously carried out on brain-type computers, which are based on the manner in which the brain processes information. Intuitive information processing such as pattern recognition, context association, and combinatorial optimization, which a brain-type computer performs easily, is an indispensable technique for smooth communication between information processing machines and human beings, and is expected to be a breakthrough allowing such machines to be fitted into and utilized in society without a sense of discomfort. Practical utilization of the brain-type computer requires dedicated hardware; in particular, development of a neural network associative memory LSI, a main component of the brain-type computer, is strongly required. For the neural network associative memory LSI to be practical, the ultimate task is to realize high integration of synapse circuits having the learning functions required to store associative patterns quickly and freely. In a neural network forming an associative memory, the number of required synapses is approximately proportional to the square of the number of neurons. Accordingly, high integration of the synapses is most effective for high integration of the associative memory. In addition, in order for the associative memory to store associative patterns quickly and freely, each synapse must have a learning function.
As prior art related to the associative memory neural network LSI with the learning function, a brief description will be made of the techniques disclosed in Japanese Patent Laying-Open No. 03-80379 and U.S. Pat. Nos. 5,148,514 and 5,293,457 (inventor: Yutaka Arima et al.). FIGS. 7A and 7B show exemplary block diagrams of the conventional neural network LSI. In each of FIGS. 7A and 7B, a line of neuron circuits is disposed along each of the four sides of a chip, and the synapses are arranged in a matrix over almost all of the remaining central region of the chip. FIG. 7A illustrates two self-connected neural networks, while FIG. 7B shows one interconnected neural network. These arrangements, together with the interconnections between the neuron circuits and the synapse circuits illustrated in FIGS. 8A and 8B, enable the neural network for the associative memory to be configured efficiently.
FIG. 9 shows an example of the synapse circuit with the learning function. In a synapse circuit 100 illustrated in FIG. 9, a synapse load value (Wij) is represented by the amount of charge accumulated at a capacitor C1. The amount of charge accumulated at capacitor C1 is corrected in accordance with a learning law ΔWij = ±ηSiSj, where η is a learning coefficient, the correction being applied in accordance with the number of pulses applied to ACP+ and ACP−. The correction is performed by a load correction circuit 101, formed of a charge pump circuit, and a learning control circuit 102 that applies a correction signal thereto. Si and Sj respectively correspond to the output signals of the neurons i and j that apply signals to this synapse. In this prior art example, since symmetrical synapse coupling (Wij = Wji) is assumed, two synapse coupling operational circuits 103 are mounted in synapse circuit 100, sharing one synapse load value.
FIG. 10 shows an example of the neuron circuit. In a neuron circuit 110 illustrated in FIG. 10, the output currents from the synapses are added together at a common node 111 (Kirchhoff adder) and thereby converted to a voltage, and the voltage is compared with a threshold (Vref) of the neuron at a comparator 112. Two selectors 113 and 114 selectively output either the output of comparator 112 or teacher data SR(T) held in a register 116, in accordance with attribute data SR(P) held in a register 115 within the neuron circuit and a learning control signal IselS.
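The sum-and-threshold operation of the neuron circuit can be sketched as follows (an idealized model, not the analog circuit itself; the function name and the bipolar ±1 output convention are assumptions for illustration):

```python
def neuron_output(weights: list[float], inputs: list[int],
                  v_ref: float = 0.0) -> int:
    """Idealized neuron: sum weighted synapse contributions at a common
    node (Kirchhoff current addition) and compare against threshold Vref,
    as the comparator does in FIG. 10.  Returns a bipolar output."""
    total = sum(w * s for w, s in zip(weights, inputs))
    return 1 if total > v_ref else -1

print(neuron_output([0.5, -0.2], [+1, +1]))  # 1  (0.3 exceeds Vref = 0)
print(neuron_output([0.5, -0.2], [-1, +1]))  # -1 (-0.7 is below Vref)
```

During learning, the selectors described above would substitute the externally supplied teacher data for this computed output, which is what allows target patterns to be imposed on the network.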
In accordance with the above-described prior art, synapse circuits with the learning functions can be integrated at relatively high density. In fact, it is reported that integration of 80,000 synapses and 400 neurons on one chip was successfully achieved using a 0.8 μm CMOS technology (Y. Arima et al., "A Refreshable Analog VLSI Neural Network Chip with 400 Neurons and 40 K Synapses," IEEE Journal of Solid-State Circuits, vol. 27, no. 12, pp. 1854–1861, December 1992). Furthermore, by combining this prior art with the currently cutting-edge 0.15 μm CMOS (Complementary Metal-Oxide Semiconductor) technology, approximately 2 million synapses and approximately 2,000 neurons could be integrated on one chip. In that case, associative storage of approximately 300 patterns would be possible. For practical utilization, however, such a storage capacity is not sufficient.