1. Field of the Invention
The present invention relates to a neural network circuit formed by connecting the input and output terminals of a plurality of neuron circuits, which serve as unit circuits.
2. Discussion of the Related Art
Neural network circuits are modeled on the nervous systems of animals and are capable of processing such as pattern recognition (for example, character recognition or voice recognition), optimization, robot control, and the like, which was difficult for von Neumann-type computers. In a conventional von Neumann-type computer, processing is conducted sequentially in accordance with a program, so that computation times are long. In contrast, in a neural network circuit, the neuron circuits execute their calculations in parallel, so that processing becomes extremely fast. Furthermore, the functions of a neural network circuit are realized through learning, by changing the connection states between neurons. For this reason, such circuits have the special characteristic of being able to realize functions even for problems whose processing procedures are difficult to express as rules, provided that learning material is available. When such a circuit is operated while continuing to learn normally, it is possible to construct a flexible system which can follow changes over time in the desired functions, for example changes caused by the environment. In addition, because the network is constructed by connecting a plurality of identical neuron circuits, if a breakdown occurs in a circuit, operation can easily be continued by simply replacing that circuit with another normally functioning circuit, so that high fault tolerance can be realized in cases in which LSIs are used. The present invention is applicable to cases in which neural network circuits are constructed using LSIs, and thus relates to a method for constructing neuron circuits which have small-scale circuitry and consume little electric power.
A neural network circuit utilizes neuron circuits, which correspond to nerve cells, as its units, and is constructed by connecting a number of these neuron circuits. FIG. 27 shows one neuron circuit. A neuron circuit receives signals from a plurality of input terminals and has a weight coefficient corresponding to each input signal. It calculates, for each input, the difference between the input signal and the corresponding weight coefficient, combines all of these results, and from the combined value determines the output. The structure of a neural network circuit is determined by the connections between the neuron circuits. The simplest structure is a two-layer neural network structure, such as that shown in FIG. 28. The layer of input terminals is called the input layer or the first layer, while the layer of neuron circuits is called the second layer or the output layer. The signals from the various input terminals are supplied in parallel to all the neuron circuits, so that the neuron circuits can process the input signals in parallel. Recognition processing is realized by having specified neuron circuits react when particular input signals are supplied.
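As an illustrative software sketch (not part of the circuit itself), the two-layer recognition scheme above can be modeled as follows. Each second-layer neuron stores a reference pattern as its weight vector and responds according to the distance between that vector and the input; the neuron with the strongest response (smallest distance) recognizes the pattern. The patterns and values below are hypothetical examples chosen for illustration.

```python
import math

def neuron_response(inputs, weights):
    """Distance between the input vector and the neuron's weight vector;
    a smaller value means the stored pattern matches the input more closely."""
    return math.sqrt(sum((x - w) ** 2 for x, w in zip(inputs, weights)))

def two_layer_recognize(inputs, weight_vectors):
    """Broadcast the input signals to every second-layer neuron in parallel
    and report which neuron reacts most strongly (smallest distance)."""
    distances = [neuron_response(inputs, w) for w in weight_vectors]
    return distances.index(min(distances))

# Three neurons, each storing a hypothetical reference pattern.
patterns = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]
print(two_layer_recognize([0.9, 0.1], patterns))  # -> 1 (closest to [1.0, 0.0])
```

In hardware all the distance computations proceed simultaneously; the sequential loop here stands in for that parallelism.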
However, the processing ability of a two-layer neural network structure is not large, so that in general a three-layer neural network structure, such as that shown in FIG. 29, is used. In a three-layer structure, the second layer of neuron circuits is termed the intermediate layer or the hidden layer, while the third layer of neuron circuits is called the output layer; this third layer utilizes the outputs of the neuron circuits of the second layer as its inputs. There are cases in which the structure of the third layer is identical to that of the second layer, and cases in which it is different. In the case of an identical structure, the output signals of the intermediate layer are inputted to all the neuron circuits of the output layer. However, a simpler structure is also possible in which the neuron circuits of the output layer conduct only OR logic, as shown in FIG. 29. In such a case, each output of the intermediate layer is connected to only one neuron circuit of the output layer, so that the scale of the circuitry can be greatly reduced while sufficient ability is maintained in cases in which the circuits are used for pattern recognition or the like. However, in order to deal with complex processing, it has been common to use a complex network structure in which the outputs of neuron circuits are fed back, a multilayered structure of three or more layers is used, or complex network circuits are combined.
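The OR-logic output layer of FIG. 29 can be sketched as follows: each output-layer neuron simply ORs the binary firing signals of the intermediate-layer neurons assigned to its recognition class. The threshold value and the grouping are assumptions for illustration only.

```python
def fires(distance, threshold=0.5):
    """Binary output of an intermediate-layer neuron: 1 when the input
    pattern lies within `threshold` of its stored weight vector, else 0.
    (The threshold is a hypothetical value.)"""
    return 1 if distance < threshold else 0

def or_output_neuron(intermediate_outputs):
    """Output-layer neuron conducting only OR logic, as in FIG. 29."""
    return 1 if any(intermediate_outputs) else 0

# Two intermediate neurons assigned to one class; the class is recognized
# when either of them fires.
print(or_output_neuron([fires(0.8), fires(0.2)]))  # -> 1
```

Because each intermediate output feeds only one OR gate, no weight coefficients are needed in the output layer, which is the source of the circuit-scale reduction described above.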
A neuron circuit used in conventional neural network circuits is shown in FIG. 30. It has n weight coefficients (w1-wn), corresponding to the number of inputs n. The difference between each input signal and the corresponding weight coefficient is found by a subtracting circuit, and this result is squared in a squaring circuit; the calculation results for all of the inputs and weight coefficients are then accumulated in an adding circuit. The output value is determined by the magnitude of the square root of this accumulated result. The threshold-value circuit which finally determines the output value has transmission characteristics such as those shown in FIGS. 31(a)-(c): (a) shows a step function pattern, (b) shows a polygonal-line pattern, and (c) shows a sigmoid function pattern. The sigmoid function pattern of FIG. 31(c) has high flexibility; however, as its calculation is complex, the simplified patterns of (a) and (b) may be used instead.
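The datapath of the conventional neuron of FIG. 30 (subtract, square, accumulate, square root, threshold) can be sketched in software as follows. The three threshold functions correspond in shape to FIGS. 31(a)-(c); the threshold and gain values, and the sense of the comparison, are assumptions chosen for illustration, since the figures specify only the general patterns.

```python
import math

def step(u, theta=1.0):
    """FIG. 31(a): step function pattern (assumed threshold theta)."""
    return 1.0 if u >= theta else 0.0

def polygonal(u, lo=0.5, hi=1.5):
    """FIG. 31(b): polygonal-line (piecewise-linear) pattern."""
    if u <= lo:
        return 0.0
    if u >= hi:
        return 1.0
    return (u - lo) / (hi - lo)

def sigmoid(u, theta=1.0, gain=4.0):
    """FIG. 31(c): sigmoid function pattern (assumed gain)."""
    return 1.0 / (1.0 + math.exp(-gain * (u - theta)))

def neuron(inputs, weights, activation=step):
    """FIG. 30 datapath: subtracting circuits, squaring circuits, an adding
    circuit, a square root, and finally the threshold-value circuit."""
    acc = sum((x - w) ** 2 for x, w in zip(inputs, weights))
    return activation(math.sqrt(acc))
```

In the hardware circuit, one subtracting circuit and one squaring circuit exist per input; the Python loop over `zip(inputs, weights)` stands in for those parallel units.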
A network circuit having the three-layer structure of FIG. 29 and constructed using the neuron circuits of FIG. 30 has been used for pattern recognition and the like. If the number of neuron circuits in the intermediate layer of the structure of FIG. 29 is m and the number of input terminals of the input layer is n, then m×n weight coefficients exist, and the same number of subtracting circuits and squaring circuits are necessary. As the number of objects of pattern recognition increases, the number m of neurons in the intermediate layer increases, so that it can be understood that an extremely large number of subtracting circuits and squaring circuits become necessary. In particular, when the neural network circuit is realized with digital circuits, the circuit scale of the squaring circuit, which uses a multiplication circuit, becomes extremely large, so that the apparatus itself becomes extremely large, and a problem exists in that a plurality of neuron circuits cannot be placed on one LSI. Furthermore, the squaring circuit is large not only in circuit scale but also in the amount of electric power it consumes, so that a problem exists in that an extremely large amount of electric power is consumed by the apparatus as a whole when an extremely large number of these circuits are operated simultaneously.
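The circuit-count arithmetic above can be made concrete with a short illustration; the sizes below (a 16×16 binary input pattern and 100 intermediate-layer neurons) are hypothetical values, not figures from the specification.

```python
def circuit_counts(n_inputs, m_neurons):
    """m x n weight coefficients in the intermediate layer, with one
    subtracting circuit and one squaring circuit per coefficient in the
    conventional neuron of FIG. 30."""
    coeffs = n_inputs * m_neurons
    return {"weights": coeffs, "subtractors": coeffs, "squarers": coeffs}

# Hypothetical example: a 16x16 input pattern (n = 256) and m = 100
# intermediate-layer neurons already require 25,600 squaring circuits.
print(circuit_counts(256, 100)["squarers"])  # -> 25600
```

Since each squaring circuit contains a multiplication circuit, this count translates directly into the circuit-scale and power-consumption problems described above.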