Some manufacturing processes involve the selective choice of components for the assembly of a finished product. This selection may be directed at reducing the cost or enhancing the quality or performance of such products, or, if the components are expensive, at enhancing the process yield. The components are tested, either individually or as a lot, with the test data used for component selection or "kitting".
Because the kitting criteria may be very complex, a variety of modern techniques have evolved for aiding in the selection process. One technique is the expert system, which makes recommendations based on a collection of explicit rules gleaned from human "experts". Another technique is the neural network or net, which is progressively trained to recognize patterns in the selection process based on the actual outcome of the process.
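The expert-system approach can be illustrated with a minimal sketch. The rule conditions and thresholds below are hypothetical illustration values, not actual kitting criteria; the point is only that the knowledge lives in explicit, inspectable rules.

```python
# A minimal sketch of a rule-based expert system for component
# selection. The measurements and thresholds are hypothetical
# illustration values, not real kitting criteria.

def expert_system_recommend(component):
    # Explicit rules, gleaned from human "experts", are checked in order;
    # the first rule whose condition matches produces the recommendation.
    rules = [
        (lambda c: c["tolerance"] > 0.05, "reject: tolerance out of spec"),
        (lambda c: c["cost"] > 10.0, "reject: too expensive"),
    ]
    for condition, recommendation in rules:
        if condition(component):
            return recommendation
    return "accept"

print(expert_system_recommend({"tolerance": 0.02, "cost": 4.0}))  # accept
```

Because each rule is an explicit statement, the system's reasoning can be inspected directly, which is the chief contrast with the neural-net approach described below.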
While expert systems are well-known in the art and have established a place in industrial applications, neural nets are still new enough to deserve some explanation and justification.
The simulation of human neural patterns on a computer has been discussed for more than 30 years but practical applications remained limited until the recent development of better learning algorithms. These have made neural nets a powerful new tool for any application that involves pattern recognition.
The most common neural net configuration is the multi-layer feed forward, back propagation network. In such a network, each node represents a "neuron", analogous to those found in the human brain. Each neuron contains a small amount of processing power; used in parallel, the neurons constitute the network. Like its physical counterpart, each neuron has an "activation" level, which depends on the amount of stimulation it receives from the neurons around it. Based on an "activation function", the neuron sends its resulting output to the neurons around it.
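The behavior of a single neuron can be sketched as follows, assuming a logistic (sigmoid) activation function; the weights, bias, and inputs are hypothetical illustration values.

```python
import math

def sigmoid(x):
    # A common choice of activation function: maps any stimulation
    # level to an activation between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # The neuron's stimulation is the weighted sum of the outputs of
    # the neurons feeding it; its output is the activation of that sum.
    stimulation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(stimulation)

print(neuron_output([0.5, 0.9], [0.4, -0.6], 0.1))
```

The weights on the incoming connections determine how strongly each neighboring neuron stimulates (or inhibits) this one, and it is these weights that training adjusts.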
In a feed forward network, each neuron is connected to all of the neurons in the preceding layer and to all of the neurons in the following layer; connections run only in the forward direction. By changing the weight, or importance, of each connection, it is possible to train the network to associate inputs with outputs. A relatively recent algorithm, back propagation, is used to gradually shift the weights of the connections until the network is "trained".
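One training step of such a network can be sketched as below, for a small 2-2-1 feed forward network with sigmoid activations. The initial weights, input, target, and learning rate are hypothetical illustration values; a single back propagation step shifts the weights so that the network's error on that example shrinks.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    # Each hidden neuron is connected to every input; the output neuron
    # is connected to every hidden neuron. Signals move forward only.
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, ws))) for ws in w_hidden]
    output = sigmoid(sum(h * w for h, w in zip(hidden, w_out)))
    return hidden, output

def train_step(x, target, w_hidden, w_out, lr=0.5):
    hidden, out = forward(x, w_hidden, w_out)
    # Output-layer error term (the sigmoid derivative is out * (1 - out)).
    delta_out = (target - out) * out * (1.0 - out)
    # Propagate the error term backward to each hidden neuron.
    deltas_h = [delta_out * w * h * (1.0 - h) for w, h in zip(w_out, hidden)]
    # Shift each weight a small amount in the direction that reduces error.
    new_w_out = [w + lr * delta_out * h for w, h in zip(w_out, hidden)]
    new_w_hidden = [[w + lr * d * xi for w, xi in zip(ws, x)]
                    for ws, d in zip(w_hidden, deltas_h)]
    return new_w_hidden, new_w_out

x, target = [1.0, 0.0], 1.0
w_hidden = [[0.2, -0.4], [0.7, 0.1]]
w_out = [0.5, -0.3]
_, before = forward(x, w_hidden, w_out)
w_hidden, w_out = train_step(x, target, w_hidden, w_out)
_, after = forward(x, w_hidden, w_out)
print(abs(target - before), abs(target - after))  # error shrinks after one step
```

Repeating this step over many training examples, many times, is what "gradually shifts the weights" until the network associates inputs with the desired outputs.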
A properly trained network has interesting properties. First, it is possible to store a large amount of information in a relatively small number of neurons. This is because the neural net is actually a device for generalizing constraints. Irrelevant factors are quickly randomized as the network learns, leaving only those features which are truly important for making distinctions. If data items are contradictory, or partially dependent on each other, the neural net will have difficulty because no unique set of constraints exists which will optimize the weights.
Second, since the information is stored as part of the network structure, it is necessarily cryptic. The patterns are stored implicitly within the network as a whole, not locally as in a conventional data file. This places them beyond the reach of direct inspection; one may not ask the network why it came to a particular conclusion. The output of the network is the combined result of the training data, the network architecture, the activation function, and the learning algorithm. Therefore, information cannot be addressed directly.
Third, while nonlocality of data may be perceived as a drawback, it is also a virtue because it is the key to a neural net's ability to generalize. In most rule-based systems, small differences in the input data can result in an incorrect answer. Neural nets, however, degrade gracefully. Inaccuracies in the input can still allow a neural net to come to the correct conclusion. Thus, recognition of items that are similar but slightly different is a strong point for this technique.
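This graceful degradation can be sketched with a single sigmoid neuron acting as a pattern detector. The weights below are hypothetical illustration values, standing in for weights a net might have learned to fire on the pattern [1, 0, 1]; a slightly corrupted version of that pattern still produces the same conclusion.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical learned weights for a neuron that fires on the
# pattern [1, 0, 1]; these are illustration values, not trained ones.
weights = [4.0, -4.0, 4.0]
bias = -4.0

def recognizes(pattern):
    activation = sigmoid(sum(p * w for p, w in zip(pattern, weights)) + bias)
    return activation > 0.5

print(recognizes([1.0, 0.0, 1.0]))  # the exact pattern
print(recognizes([0.9, 0.1, 0.8]))  # a slightly corrupted input, same conclusion
print(recognizes([0.0, 1.0, 0.0]))  # a genuinely different pattern is rejected
```

A rule written as an exact match on [1, 0, 1] would reject the corrupted input outright; the neuron's smooth activation lets small inaccuracies pass while still rejecting genuinely different patterns.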
Neural nets have already found their way into a number of industrial applications. Many involve some form of image recognition, such as handwriting recognition. Neural nets are also used to help decrease impurity levels in chemical processing facilities. A neural net vision system is now used to recognize cancerous cells in Pap smears, and another is used to detect the nuclear signatures of explosive material in a bomb detection system. Other uses include process control and noise suppression in television receivers. These and many other industrial tasks depend, in one way or another, on pattern recognition, which is a strong point of neural nets.
Even with many advantages, neural nets are not appropriate for all applications. Expert systems are typically the best choice for tasks with a well-defined process or a written set of requirements. Multivariate statistical analysis is yet another tool which can be used for problems which lack well-defined rules, but have a wealth of data.
Expert systems and neural nets, when used alone, are each inadequate for reliable kitting. Expert systems provide direct and comprehensible control over the contents of the system, but they are difficult to develop and depend on the reliability of the original expert. They are, however, easy to understand and maintain once constructed. Neural nets are easy to train, but it is difficult to understand the internal representation of their knowledge. Also, neural nets are only as good as their training data, which may be incomplete or contradictory.
Therefore, there is a need for improved reliability in kitting procedures, a need that may be met by combining an expert system with a neural net in a single kitting system.