The present invention relates to a neuron architecture for processing signal information and, more particularly, to neural networks which process multiple signals on common buses.
A neural network includes a plurality of neuron units, or neurons, which are interconnected to process signal information. Each neuron unit is a processor which combines multiple inputs, applying a different weight, or connection strength, to each input. The weights may be stored in a memory and adjusted in a desired manner.
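The weighted combination performed by a single neuron unit can be sketched in software as follows. This is a minimal illustrative sketch, not the circuit of the invention; the sigmoid activation function and the bias term are assumptions, since the text does not specify them:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """One neuron unit: combine multiple inputs, each multiplied by its
    weight (connection strength), then apply a sigmoid activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))
```

Here the list of weights plays the role of the stored, adjustable connection strengths described above.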
A neural network can be utilized for character-image pattern recognition, voice pattern recognition, robot control in machine control, expert-system applications in knowledge processing, image compression and decompression in signal processing, and so forth. By connecting the neuron units in a network, the units can perform massively parallel data processing with a built-in learning capability. Therefore, neural networks are expected to come into wide use. At present, neural networks are typically executed by simulation software running on personal computers. However, as the size of the network increases, the software becomes more complex and the processing time increases. The neuron units could instead be realized in hardware, but as the number of inputs and the size of the memory increase, the cost and complexity of such hardware increase significantly.
When a neural network is realized in the form of an integrated circuit, it becomes important to provide a system for interconnecting or linking the respective neuron units or processing units with one another, and a system for determining the weights, so as to produce a large-scale, highly accurate circuit with high-speed processing. The problems in developing such integrated circuitry are discussed in the following brief description of known neural networks.
FIG. 1 shows a conventional layer-type neural network. Each neuron unit 1A is connected to other units by a connection, or arc, representing a synaptic connection. Units I1 to I5 form the input layer, units H1 to H10 form the hidden layer, and units O1 to O4 form the output layer. Each of the units I1 to I5 in the input layer is connected in common to each of the units H1 to H10; for example, unit I1 is connected to all of units H1 to H10. As shown, the neural network circuit is generally formed of several layers. A neural network learns through an error back-propagation algorithm, changing the weights of the connections from the output layer back toward the input layer so that the error between the teacher signal and the output signal at the output layer is minimized. During learning, an initial value is provided as the weight of each connection, and if the output value produced by the network is not the desired value, the weight values are changed so as to decrease the error.
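The weight-change step described above can be illustrated with a minimal sketch of a single learning update for one sigmoid output unit. This is only the output-layer portion of back propagation, not the full algorithm through all layers, and the sigmoid activation and learning rate are assumptions for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_output_weights(weights, hidden_outputs, teacher, lr=0.5):
    """One learning step: if the unit's output differs from the teacher
    signal, change each weight so as to decrease the squared error."""
    output = sigmoid(sum(w * h for w, h in zip(weights, hidden_outputs)))
    delta = (teacher - output) * output * (1.0 - output)  # sigmoid gradient
    return [w + lr * delta * h for w, h in zip(weights, hidden_outputs)]
```

Repeating this update drives the output signal toward the teacher signal, which is the behavior the learning process above requires.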
In such a neural network, all the units in one layer are connected to each unit in the next layer, and the strength of each connection is determined by changing the weight between the connected units. It is therefore practically difficult to realize a large-scale neural network, which requires many units and many connections between them.
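To see why the number of connections grows quickly, note that a fully connected layer of m units feeding a layer of n units requires m times n weighted connections. A short illustrative count, using the layer sizes of FIG. 1:

```python
def connection_count(layer_sizes):
    """Total weighted connections in a fully connected layered network:
    every unit in one layer connects to every unit in the next layer."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# FIG. 1: 5 input units, 10 hidden units, 4 output units
# -> 5*10 + 10*4 = 90 connections
```

Scaling each layer to 100 units already requires 20,000 connections, which illustrates the difficulty of realizing a large-scale network in hardware.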
When a neuron unit is constructed using an operational amplifier in each processing block, the amplifier exhibits an offset voltage: even when zero input voltage is applied, a small voltage ΔV is produced as an output voltage. Because these small errors accumulate across many units, a large-scale, highly accurate neural network cannot be constructed in this way. In the learning process, it is necessary to change the weight of each synaptic connection; a voltage-control-type resistor cannot set its resistance with sufficient precision and stability, so a highly accurate neural network cannot be constructed using such a resistor either. In view of these and the other problems discussed above, learning and problem solving by a neural network have often been executed by simulation on a sequential computer. Thus, a large-scale neural network realized in hardware alone has not been achieved.