Artificial neural networks consist of a set of elements that start out connected in a random pattern and, based upon operational feedback, are molded into the pattern required to generate the desired results. Artificial neural networks are used in applications such as robotics, diagnostics, forecasting, image processing, and pattern recognition.
Artificial neural networks, sometimes simply referred to as neural networks, are neurons or processing elements (PEs) grouped into input, hidden, and output layers that communicate in parallel via full interconnections of PEs between layers. The strengths of the interconnections are called weights. During training sessions of a network, learning is governed by a training algorithm and a learning paradigm, which together adjust the weight strengths in response to each training pattern. The training algorithm preferably used herein is called Backward-Error Propagation (BEP), sometimes referred to as back-propagation, a method of error analysis in which perturbations applied to the weights are distributed in a manner that reduces the overall network epoch error. All weights of the network are dimensions in the domain of the BEP transform (weight space). The learning paradigm (convergence to a correct result) uses a gradient-descent method to seek global minima in this weight space.
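The training procedure described above can be sketched in miniature. The following is a minimal illustration only, not the networks discussed herein: the single hidden layer, sigmoid activations, layer sizes, learning rate, and XOR toy training set are all illustrative assumptions.

```python
# Minimal back-propagation sketch: gradient descent on the weight space
# of a small feed-forward net. Layer sizes, learning rate, and the XOR
# toy training set are illustrative assumptions.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_in, n_hid, n_out = 2, 4, 1
# Interconnection strengths (weights) start out random, as in an untrained net.
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(n_in))) for j in range(n_hid)]
    y = [sigmoid(sum(w2[k][j] * h[j] for j in range(n_hid))) for k in range(n_out)]
    return h, y

def train_epoch(data, lr=0.5):
    epoch_error = 0.0
    for x, t in data:
        h, y = forward(x)
        # Error terms are propagated backward and distributed over the
        # weights so as to reduce the overall epoch error.
        dy = [(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(n_out)]
        dh = [h[j] * (1 - h[j]) * sum(dy[k] * w2[k][j] for k in range(n_out))
              for j in range(n_hid)]
        for k in range(n_out):
            for j in range(n_hid):
                w2[k][j] += lr * dy[k] * h[j]
        for j in range(n_hid):
            for i in range(n_in):
                w1[j][i] += lr * dh[j] * x[i]
        epoch_error += sum((t[k] - y[k]) ** 2 for k in range(n_out))
    return epoch_error

# XOR: a classic pattern a single-layer perceptron cannot learn.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
first_err = train_epoch(data)
err = first_err
for _ in range(4999):
    err = train_epoch(data)
print(round(err, 4))
```

Each pass over the whole training set is one epoch; the epoch error summed over all training records is the quantity that gradient descent drives downward through weight space.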
The network topology is disorganized before training; neural pathways are randomized, with no associative learning. Training data consist of characteristic patterns of the object or objects being analyzed. The goal of the network is to associate each class of characteristic patterns with a defined representation. Learning by the network is complete when the error in the defined representation is less than a pre-specified small number; a typical threshold is less than 1% of the defined representation. At this condition the network has converged, yielding an answer.
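The convergence criterion above can be expressed compactly. In the sketch below, the defined representation is assumed to be a vector of target values on a full scale of 1.0, so "less than 1%" becomes a tolerance of 0.01; these are illustrative assumptions.

```python
# Minimal sketch of the convergence test: learning is complete when the
# error in the defined representation falls below a pre-specified small
# number (here 1% of an assumed full scale of 1.0).
def has_converged(output, target, tol=0.01):
    return max(abs(o - t) for o, t in zip(output, target)) < tol

# An output matching the defined representation to within 1% has converged.
print(has_converged([0.999, 0.002], [1.0, 0.0]))  # → True
```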
A training record associated with the training of a neural network consists of an input and an output. The input is a characteristic pattern recorded from a vibrating structure excited to vibrate at very low amplitude. The structure is excited to vibrate in a normal or resonant mode, and the characteristic pattern shows the mode shape. In one method, a characteristic pattern is generated using electronic or television holography. Television holography is available commercially in more than one form and is discussed extensively in the literature.
Neural network processing of characteristic patterns of vibrating structures is used routinely for non-destructive evaluation. The characteristic patterns are generated using electronic time-average holography of the vibrating structure and are sub-sampled before processing. The lower resolution patterns containing a few hundred to a few thousand pixels are then presented to an experimentally trained neural network. The neural network is trained to detect small changes in the characteristic patterns resulting, for example, from structural changes or damage.
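Sub-sampling a characteristic pattern down to a few hundred to a few thousand pixels can be sketched as block averaging. The 256 x 256 source size and the factor of 8 below are illustrative assumptions, not the camera resolutions discussed in the text.

```python
# Minimal sketch of sub-sampling a characteristic pattern before it is
# presented to the net; source size and block factor are illustrative.
def subsample(image, factor):
    """Average non-overlapping factor x factor blocks of a 2-D list."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows - rows % factor, factor):
        out.append([
            sum(image[r + i][c + j] for i in range(factor) for j in range(factor))
            / (factor * factor)
            for c in range(0, cols - cols % factor, factor)
        ])
    return out

# Synthetic 256 x 256 pattern standing in for a holographic frame.
full = [[(r + c) % 256 for c in range(256)] for r in range(256)]
low = subsample(full, 8)
print(len(low) * len(low[0]))  # → 1024, i.e. "a few thousand pixels"
```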
The neural network electronic-holography combination used to detect structural changes and damage has evolved through several stages. One current combination is experimentally trained and is immune to the laser speckle effect. This combination can be used with cameras operated at 30 frames per second and uses feed-forward artificial neural networks (multi-layer perceptrons) very efficiently. The feed-forward architecture, known in the art, is probably the most familiar architecture for so-called artificial neural networks and has many benefits to recommend it.
An artificial neural network is sometimes defined to be any processing system that is programmed with a training set of exemplars. As a specific example, the feed-forward neural network (net) remains compact in software as the size of that training set increases; has good noise immunity; and can be trained with straightforward algorithms of the back-propagation genre. The feed-forward net can process fairly large input images, if the number of hidden-layer nodes is not too large, and is well suited to processing speckled characteristic patterns at 30 frames per second when those characteristic patterns contain a few hundred to a few thousand pixels.
Feed-forward artificial neural networks, or multi-layer perceptrons, known in the art, do at times have a reputation for being unable to learn training sets that are otherwise deemed learnable. Nevertheless, it has been known for some time that the performance of feed-forward artificial neural networks can be enhanced greatly by conditioning the inputs. A proprietary functional-link net transforms inputs mathematically before subjecting them to the back-propagation algorithm. Another practice that improves learning is to scale the individual pixels of the training exemplars to cover the entire input range of the feed-forward net. So-called min-max tables of the minimum and maximum pixel values are used for scaling. Learning of characteristic patterns does improve with positional scaling, but the associated neural networks are susceptible to over-training. Furthermore, the associated neural networks often do not achieve the sensitivity desired for non-destructive evaluation procedures. It is desired that the sensitivity of neural networks be improved without suffering the consequences of over-training.
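Positional scaling with a min-max table can be sketched as follows; the three-pixel patterns and the target input range [0, 1] are illustrative assumptions.

```python
# Minimal sketch of positional (per-pixel) min-max scaling using a
# min-max table built from the training exemplars; the target input
# range [0, 1] is an illustrative assumption.
def build_minmax_table(exemplars):
    # One (min, max) entry per pixel position across all exemplars.
    return [(min(col), max(col)) for col in zip(*exemplars)]

def scale(pattern, table):
    # Stretch each pixel to cover the full input range of the net;
    # a constant pixel (min == max) carries no information and maps to 0.
    return [(p - lo) / (hi - lo) if hi > lo else 0.0
            for p, (lo, hi) in zip(pattern, table)]

# Toy exemplars: three 3-pixel characteristic patterns.
exemplars = [[10, 200, 55], [20, 100, 55], [30, 150, 55]]
table = build_minmax_table(exemplars)
print(scale([20, 150, 55], table))  # → [0.5, 0.5, 0.0]
```

Because the table is built per pixel position from the training exemplars, scaling adapts to each location in the pattern, which is what makes the technique "positional" and also what exposes it to over-training on the exemplars.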