The invention relates to a processing method using an error back-propagation algorithm in a layered neural network device, and to a network architecture for implementing this method.
The invention can be used to solve problems relating to classification, pattern recognition, character recognition, speech signal processing, image processing, data compression, etc.
Neurons are non-linear elementary active elements which are interconnected in a very dense network. Two types of network are considered:

- fully connected networks, referred to as Hopfield networks;
- layered networks, in which the neurons are grouped in successive layers, each neuron being connected to all neurons of the next layer; the information passes from the input layer through the next layers (hidden layers) until it reaches the output layer.

These systems are capable of being trained by examples or of organising themselves. The very long calculation times on a sequential computer can be substantially reduced by performing the operations involved in the training or resolving process in parallel.

The training algorithms can be subdivided into two categories:

- local training, where the modification of a synaptic coefficient C.sub.ij linking the input neuron i to the output neuron j depends only on the information present in neurons i and j;
- non-local training, where the modification depends on information present throughout the network. The latter category comprises, for example, the error back-propagation algorithm in a layered neural network architecture, which is the subject of the present invention.

The method according to the invention begins with an initialisation phase:

- in the resolving apparatus: initialisation of the synaptic coefficients C.sub.ij in the synaptic coefficient memories of the entire group of resolving processors;
- in the training apparatus: initialisation of the transposed matrix T.sub.ji of the matrix C.sub.ij in the synaptic coefficient memories of the entire group of training processors.
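As an illustration, the resolving (forward) pass through a layered network of this kind — each neuron computing a non-linear function of the weighted sum of the outputs of the previous layer — can be sketched in a few lines of Python. All coefficient values, dimensions, and names below are illustrative assumptions, not taken from the patent:

```python
import math

def sigmoid(v):
    # a typical non-linear activation for such neurons
    return 1.0 / (1.0 + math.exp(-v))

def resolve(layers, x):
    """Resolving pass: each layer computes y_j = f(sum_i C[j][i] * x_i),
    the result of one layer feeding the next until the output layer."""
    for C in layers:
        x = [sigmoid(sum(c_ji * x_i for c_ji, x_i in zip(row, x)))
             for row in C]
    return x

# a small example: 3 inputs -> 2 hidden neurons -> 1 output neuron
layers = [
    [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2]],   # hidden-layer coefficients C.sub.ij
    [[0.7, -0.6]],        # output-layer coefficients
]
out = resolve(layers, [1.0, 0.5, -1.0])
```

Each layer's matrix-vector product is independent per output neuron, which is what makes the parallel organisation over several processors attractive.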
A method and a network of this kind are known from the document "A VLSI Architecture for feedforward networks with integral back-propagation", J. J. Paulos and P. W. Hollis, Neural Networks, No. 1, supplement 1, p. 399, 1988. This is an analog/digital circuit comprising two sub-networks: the resolving step is performed in the first sub-network, while the back-propagation step takes place in the second. It also comprises a calculation unit (ALU) for updating the coefficients; this updating takes place sequentially.
Thus, the problem addressed by the invention is to increase the processing speed of such a network apparatus while limiting data transport.
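The role of the transposed matrix T.sub.ji mentioned in the initialisation phase can be made concrete with a small sketch: the resolving side holds C, the training side holds its transpose T, so the output errors can be propagated backwards without transposing (and hence transporting) the coefficient matrix at training time. Everything here — values, dimensions, the learning rate — is a hypothetical illustration, not the patent's implementation:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Resolving processors hold C (C[j][i], 3 inputs -> 2 outputs);
# training processors are initialised once with the transpose T[i][j].
C = [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2]]
T = [[C[j][i] for j in range(2)] for i in range(3)]

x = [1.0, 0.5, -1.0]

# resolving step: y_j = f(sum_i C_ij * x_i)
y = [sigmoid(sum(C[j][i] * x[i] for i in range(3))) for j in range(2)]

# back-propagation step: the output errors are sent back through the
# transposed coefficients held locally by the training processors
target = [1.0, 0.0]
delta = [(y[j] - target[j]) * y[j] * (1.0 - y[j]) for j in range(2)]
back = [sum(T[i][j] * delta[j] for j in range(2)) for i in range(3)]

# coefficient update C_ij := C_ij - eta * delta_j * x_i, mirrored into T
eta = 0.1
for j in range(2):
    for i in range(3):
        C[j][i] -= eta * delta[j] * x[i]
        T[i][j] = C[j][i]
```

Keeping the transpose resident in the training apparatus is one way to limit data transport between the two apparatuses: only the error terms, not the coefficient matrices, need to cross between them.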