The present invention generally relates to neural networks and, more particularly, to a learning method for a neural network which enables a reduction of the processing time required for learning and the prevention of excessive learning.
Recently, multilayered neural networks which learn by a learning method based on error back-propagation have been employed in fields such as speech recognition and character recognition. Conventionally, multilayered perceptron type neural networks have generally been used in these fields. A multilayered perceptron type neural network is constituted by at least three layers, i.e., an input layer, an intermediate layer and an output layer.
When learning patterns and their categories are input to the input layer, the above-mentioned multilayered perceptron type neural network (referred to simply as a "neural network" hereinbelow) learns by itself to determine the boundaries of the categories to be classified. Furthermore, in the neural network, the weights of the synapse connections linking the units of each layer to the units of the adjacent layers are set automatically, and the configuration of the neural network is thereby determined.
As a result, on the basis of the learned boundaries of the categories to be classified, the neural network which has finished learning classifies input patterns into the categories to which they belong. Here, the number of units contained in the input layer is determined by the dimension of the input patterns, e.g. the learning patterns, while the number of units contained in the output layer is determined by the number of categories to be classified.
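The three-layer structure and error-back-propagation learning described above can be sketched as follows. This is only an illustrative toy implementation, not the method of the present invention: the layer sizes, learning rate, iteration count and XOR-style learning patterns are all assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative three-layer (input / intermediate / output) perceptron that
# learns by error back-propagation. All sizes and parameters here are
# assumptions for demonstration, not part of the described invention.
rng = np.random.default_rng(0)

# Learning patterns: the input layer has one unit per input dimension (2),
# the output layer has one unit per category boundary to be learned (1).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])          # target categories (XOR-like)

W1 = rng.normal(0.0, 1.0, (2, 4))               # synapse weights: input -> intermediate
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))               # synapse weights: intermediate -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)                    # intermediate-layer outputs
    y = sigmoid(h @ W2 + b2)                    # output-layer outputs
    return h, y

def mse(y, t):
    return float(np.mean((y - t) ** 2))

loss_before = mse(forward(X)[1], T)
lr = 0.5
for _ in range(5000):
    H, Y = forward(X)
    delta2 = (Y - T) * Y * (1.0 - Y)            # output-layer error term
    delta1 = (delta2 @ W2.T) * H * (1.0 - H)    # error back-propagated to intermediate layer
    W2 -= lr * H.T @ delta2                     # adjust the synapse weights
    b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0)
loss_after = mse(forward(X)[1], T)
print(f"error before learning: {loss_before:.4f}, after: {loss_after:.4f}")
```

Note that every iteration must process all learning patterns, so the per-iteration cost grows with the number of learning patterns; this is the source of the long processing time discussed below.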
However, when a conventional neural network learns, the learning patterns are simply input to the units of the input layer. Therefore, in the conventional neural network, a rather long processing time is required for learning with a large number of learning patterns, which is a great stumbling block to practical utilization of the neural network. Meanwhile, as the number of input learning patterns increases, the evaluation accuracy of the neural network generally improves. However, an increase in the number of learning patterns also lengthens the time until learning based on error back-propagation converges, or causes excessive learning. Thus, a neural network which has learned such exceptional learning patterns suffers the problem that its capability of judging categories degrades when it evaluates input patterns different from the learning patterns.
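The excessive-learning problem above can be illustrated with a deliberately simple stand-in model: when a model is flexible enough to fit every learning pattern almost exactly, its error on the learning patterns approaches zero while its error on patterns held out for evaluation grows. The polynomial fit below is a hypothetical illustration of this effect only; the data, polynomial degrees and noise level are all assumptions, and it is not the neural network of the invention.

```python
import numpy as np

# Hypothetical illustration of excessive learning (overfitting) using a
# polynomial fit as a stand-in model: low-degree fits generalize, while a
# near-interpolating fit drives the learning error toward zero but performs
# worse on patterns withheld from learning.
rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
x_learn, y_learn = x[::2], y[::2]        # learning patterns
x_eval, y_eval = x[1::2], y[1::2]        # evaluation patterns (not used in learning)

def errors(degree):
    """Mean squared error on the learning and evaluation patterns."""
    coeffs = np.polyfit(x_learn, y_learn, degree)
    learn_err = float(np.mean((np.polyval(coeffs, x_learn) - y_learn) ** 2))
    eval_err = float(np.mean((np.polyval(coeffs, x_eval) - y_eval) ** 2))
    return learn_err, eval_err

results = {d: errors(d) for d in (1, 3, 9)}
for d, (le, ee) in results.items():
    print(f"degree {d}: learning error {le:.4f}, evaluation error {ee:.4f}")
```

In practice this motivates holding some patterns out of learning and monitoring the evaluation error, which is one common countermeasure against excessive learning.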