1. Field of the Invention
The present invention relates to an arithmetic element coupling network.
2. Description of the Related Art
In recent years, a neural network has received much attention as an apparatus for learning and approximating desired input/output functions required in pattern recognition and control, as an association memory, and as a means for obtaining approximate solutions to optimization problems at high speed.
Learning here means optimizing each parameter in an arithmetic element coupling network so as to cause a neural network, i.e., neuron elements coupled in predetermined input/output relationships, to realize a desired input/output relationship. For example, in a general learning technique for a neural network, the coupling coefficients between all the neuron elements are changed so as to minimize the sum of squared errors between the actual output of the arithmetic element coupling network and the desired output for a specific input supplied as learning data. By repeating this learning, the input/output relationship of the neural network comes close to the desired input/output relationship. As a result, the neural network can serve as a function approximation apparatus.
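The learning scheme described above can be sketched as follows. This is an illustrative example only, not a construction from the patent: a single neuron with one coupling coefficient w and a monotone increasing input/output function is trained by gradient descent to reduce the squared error between its actual output and a desired output.

```python
import math

# Illustrative sketch (names and values are not from the patent): one coupling
# coefficient w is repeatedly adjusted to reduce the squared error
# E = (y - d)**2 between the actual output y and the desired output d.

def neuron(w, x):
    """Monotone increasing input/output function (sigmoid) applied to w*x."""
    return 1.0 / (1.0 + math.exp(-w * x))

def learn_step(w, x, d, lr=0.5):
    """One gradient-descent update of the coupling coefficient w."""
    y = neuron(w, x)
    grad = 2.0 * (y - d) * y * (1.0 - y) * x   # dE/dw by the chain rule
    return w - lr * grad

w, x, d = 0.0, 1.0, 0.9   # learning data: input x with desired output d
errors = []
for _ in range(100):
    errors.append((neuron(w, x) - d) ** 2)
    w = learn_step(w, x, d)

# Repeating the learning brings the actual output close to the desired one.
assert errors[-1] < errors[0]
```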
According to a general conventional technique, when a neural network is applied as an association memory or as a means for obtaining approximate solutions to optimization problems, the evaluation functions defined by these problems are embedded in an arithmetic element coupling network structure using neuron elements having monotone increasing input/output characteristics. State transition rules that monotonically decrease the function values each time the state of a neuron element changes are applied, thereby obtaining minimum solutions of the evaluation functions.
(1) To calculate an output from a neural network in response to one piece of input information, a plurality of arithmetic operations using the input/output functions of the neuron elements must be performed. To realize these conventionally with arithmetic elements, the number of arithmetic elements is set equal to the number of arithmetic operations. This makes the circuit redundant, resulting in a disadvantage in hardware design.
When an input/output relationship to be realized is complicated, or approximation precision is to be improved, the number of neuron elements required for finally realizing a desired input/output relationship inevitably increases, thus posing a problem for hardware design.
(2) When an input/output relationship to be realized is complicated or approximation precision is to be improved, the number of neuron elements constituting an intermediate layer required for finally realizing a desired input/output relationship inevitably increases. Along with this increase, the number of coupling weighting coefficients between the neuron elements is known to greatly increase.
This leads to an increase in the memory capacity for holding the weighting coefficient values, an increase in the number of signal lines for changing these values during learning, and an increase in the number of multipliers for calculating the products with the outputs from the neuron elements. In addition, the increase in the amount of calculation caused by these changes during learning undesirably lengthens the learning time.
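The growth described above can be made concrete with a back-of-the-envelope count (an illustrative calculation, not taken from the patent): in a fully connected three-layer network, the number of coupling weighting coefficients grows in direct proportion to the intermediate-layer size, and memory, signal lines, and multipliers grow with it.

```python
# Illustrative count: weights between an input layer of n_in elements, an
# intermediate layer of h elements, and an output layer of n_out elements.

def weight_count(n_in, h, n_out):
    return n_in * h + h * n_out

assert weight_count(10, 20, 5) == 300
# Enlarging the intermediate layer tenfold multiplies the weight count tenfold:
assert weight_count(10, 200, 5) == 3000
```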
(3) In any application of a neural network as a function approximation apparatus, an association memory, or a means for solving optimization problems, the portion corresponding to the neuron element is constituted by a single arithmetic element such as a differential amplifier, and its input/output characteristic is defined by the electrical characteristics of that arithmetic element. For example, when the known sigmoid function, which can be realized by a differential amplifier, is used as the input/output function, this function has been theoretically and experimentally proved to provide effective solutions for many problems.
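As a brief illustration of why a differential amplifier realizes a sigmoid (an assumed idealized model, not a circuit from the patent): the saturating transfer characteristic of an idealized differential pair is well approximated by a tanh of the differential input, which is simply a scaled and shifted sigmoid.

```python
import math

# Assumed idealized model: diff_amp(v) = tanh(gain * v), and the identity
# tanh(x) = 2*sigmoid(2x) - 1 shows this is a scaled, shifted sigmoid.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def diff_amp(v, gain=1.0):
    """Saturating input/output characteristic of an idealized differential pair."""
    return math.tanh(gain * v)

for v in (-2.0, 0.0, 0.5, 2.0):
    assert abs(diff_amp(v) - (2.0 * sigmoid(2.0 * v) - 1.0)) < 1e-12
```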
In a neural network for realizing desired input/output characteristics, however, unlike in the conventional case, the input/output characteristic of each neuron element need not be uniquely defined. This characteristic can be selected depending on the problems to be solved by the neural network, and an improvement in the capability of the arithmetic element coupling network as a whole can then be expected. Demand has therefore arisen for flexibly changing the input/output characteristic.
For example, in the neural network shown in FIG. 1 (Sprecher, D. A., "On the structure of continuous functions of several variables", Transactions of the American Mathematical Society, 115, 340-355 (1965)), an arbitrary multi-variable continuous function of variables x₁ to xₙ is known to be expressible upon selecting a specific continuous function χ, a monotone increasing continuous function φ, a real constant wᵢ, and a positive constant ε. To cause this neural network to learn and approximate a desired input/output function relationship, the continuous function χ and the monotone increasing continuous function φ, which correspond to the input/output functions of the neuron elements, must be learned. It is, however, very difficult to learn these functions. For this reason, demand has arisen for confining the number of parameters to be learned (i.e., limiting the number of parameters) in such a neural network to facilitate learning.
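A structural sketch of such a superposition network may clarify why only χ and φ need to be learned. The exact construction is given in Sprecher (1965) and FIG. 1; the particular χ, φ, wᵢ, and ε below are placeholders for illustration, not the theorem's constructions.

```python
import math

# Structural sketch only: a superposition of the shape
#   f(x_1,...,x_n) = sum_{q=0}^{2n} chi( sum_{p=1}^{n} w_p * phi(x_p + q*eps) )
# Whatever n is, only the two one-variable functions chi and phi (plus a few
# constants) appear, so these are the only functions that would need learning.

def superposition(xs, chi, phi, ws, eps):
    n = len(xs)
    return sum(chi(sum(ws[p] * phi(xs[p] + q * eps) for p in range(n)))
               for q in range(2 * n + 1))

# Placeholder one-variable functions (NOT Sprecher's constructions):
chi = math.sin                        # a continuous function
phi = lambda t: t / (1.0 + abs(t))    # a monotone increasing continuous function
ws = [0.3, 0.7]
eps = 0.1

y = superposition([0.2, 0.5], chi, phi, ws, eps)
assert isinstance(y, float)
assert phi(-1.0) < phi(0.0) < phi(1.0)   # phi is monotone increasing
```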
As described above, first, in a conventional neural network, i.e., an arithmetic element coupling network, the neuron elements having the same input/output relationship are all arranged as independent arithmetic elements. When an input/output relationship to be realized is complicated, or approximation precision is to be improved, the number of necessary arithmetic elements inevitably increases, thus posing a problem for hardware design.
Second, in a conventional arithmetic element coupling network, when an input/output relationship to be realized is complicated or approximation precision is to be improved, the number of neuron elements constituting an intermediate layer required for finally realizing a desired input/output relationship inevitably increases. Along with this increase, the number of coupling weighting coefficients between the neuron elements greatly increases. In addition, a long learning time is required.
Third, in a neural network for realizing desired input/output characteristics, these characteristics must be flexibly selected depending on problems to be solved by the neural network.
On the other hand, in an arithmetic element coupling network requiring learning of a predetermined input/output function itself of the arithmetic element, it is very difficult to learn this function. For this reason, demand has arisen for limiting the number of parameters to be learned to facilitate learning.