The present invention relates to an internal connection method for neural networks, having particular application to prediction tasks or to observation vector dimension reduction in, for example, the image processing and analysis fields. More generally, it can be applied to techniques employing multi-layer neural network modelling structures.
Over the last few years, a large number of neural network models have been proposed. Among these, the model that appears to be of most interest for industrial applications is the multi-layer Perceptron, discussed in particular in an article by D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning internal representations by error back propagation", in the book "Parallel Distributed Processing", Cambridge: MIT Press, 1986. This structure is an assembly of individual units called "neurons" or "automata". The neurons are organized into layers, each layer receiving the outputs of the preceding layer at its inputs. A learning algorithm known as the gradient back-propagation algorithm has been proposed for this type of network. This algorithm is discussed in the abovementioned article.
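The layered structure and back-propagation rule just described can be illustrated by the following minimal sketch. This is not the implementation from the cited article; the network size, the learning rate, the sigmoid activation, and the XOR training task are all assumptions chosen for demonstration.

```python
import math
import random

random.seed(0)  # fixed seed so the demonstration is reproducible


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class MLP:
    """Two-layer Perceptron: each layer feeds its outputs to the next layer's inputs."""

    def __init__(self, n_in, n_hidden, n_out):
        rnd = lambda: random.uniform(-1.0, 1.0)
        # one extra weight per neuron serves as the bias (constant input 1.0)
        self.w1 = [[rnd() for _ in range(n_in + 1)] for _ in range(n_hidden)]
        self.w2 = [[rnd() for _ in range(n_hidden + 1)] for _ in range(n_out)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0])))
                  for row in self.w1]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(row, self.h + [1.0])))
                  for row in self.w2]
        return self.o

    def train_step(self, x, target, lr=0.5):
        """One gradient back-propagation step on a single example."""
        o = self.forward(x)
        # output-layer deltas: error derivative times sigmoid derivative
        d_o = [(oi - ti) * oi * (1.0 - oi) for oi, ti in zip(o, target)]
        # propagate the error back through the output weights to the hidden layer
        d_h = [hi * (1.0 - hi) * sum(d * self.w2[k][j] for k, d in enumerate(d_o))
               for j, hi in enumerate(self.h)]
        # gradient-descent weight updates
        for k, d in enumerate(d_o):
            for j, hj in enumerate(self.h + [1.0]):
                self.w2[k][j] -= lr * d * hj
        for j, d in enumerate(d_h):
            for i, xi in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * d * xi
        return sum((oi - ti) ** 2 for oi, ti in zip(o, target))


# XOR: a small non-linearly-separable task (assumed here purely for illustration)
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
net = MLP(2, 4, 1)
first = sum(net.train_step(x, t) for x, t in data)
for _ in range(2000):
    last = sum(net.train_step(x, t) for x, t in data)
```

After training, the squared error `last` is smaller than the initial error `first`, showing the gradient descent at work.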
Even though the multi-layer Perceptron's performance is satisfactory for classification tasks, the same does not apply to prediction tasks or, notably, to dimensionality reduction. The multi-layer Perceptron performs badly in these fields owing to its poor ability to handle non-linear problems.