1. Field of the Invention
The present invention is directed to a neural network optimization method suitable for traditional perceptron neural nodes as well as for probability-based or product-operation-based nodes, as discussed in the related application. More particularly, the invention is directed to an optimization method which specifies how to determine the search space for the weights of the nodes and how to search for the optimum weights within that search space.
2. Description of the Related Art
Many researchers have investigated the perceptron processing element because it has the essential features needed to build an adaptive reasoning system, i.e., a neural network. The problem which has plagued the neural network field for the last 30 years is that most attempts to instill a learning process into a neural network architecture have been successful only to a degree. Neural network learning algorithms are generally very slow in arriving at a solution. Most such algorithms have their roots in mathematical optimization and, more specifically, in the steepest descent optimization procedure. As a result, the approaches utilized by the neural network community have been neither clever nor fast. What is needed is a different approach to neural network learning that does not require significant amounts of mathematical operations to arrive at an optimal solution. This approach to learning must take advantage of the structure of the neural network optimization problem in order to achieve significant gains in computational speed.
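The steepest descent procedure referred to above can be sketched as follows. This is an illustrative sketch only, not part of the claimed invention; the quadratic loss function, learning rate, and starting weights are assumptions chosen purely for demonstration.

```python
# Illustrative sketch of the steepest descent (gradient descent) update
# rule underlying most classical neural network learning algorithms:
# the weights are repeatedly stepped opposite the gradient of the error.
# The loss, learning rate, and starting point are demonstration choices.

def steepest_descent(grad, w, lr=0.1, steps=100):
    """Repeat the update w <- w - lr * grad(w) for a fixed number of steps."""
    for _ in range(steps):
        w = [wi - lr * gi for wi, gi in zip(w, grad(w))]
    return w

# Example: minimize E(w) = (w0 - 3)^2 + (w1 + 1)^2, whose gradient is
# (2*(w0 - 3), 2*(w1 + 1)); the unique minimum lies at (3, -1).
grad = lambda w: [2 * (w[0] - 3.0), 2 * (w[1] + 1.0)]
w_opt = steepest_descent(grad, [0.0, 0.0])
print(w_opt)  # converges toward [3.0, -1.0]
```

Even on this trivial convex problem the method needs many small iterative steps, which illustrates why such gradient-following procedures can be slow on the far larger, non-convex search spaces of neural network weights.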