1. Technical Field
The present invention relates to object recognition systems and, more particularly, to a neural network based system and method for verifying a match between two object patterns.
2. Discussion
The task of automatic object recognition represents one of the major challenges to modern computational systems. One frequently encountered problem in object recognition is the task of recognizing a match between a known and an unknown object. One example of this problem occurs in the field of face recognition. In many applications it would be desirable to have a system which can compare a previously acquired image of a face with a "live" image to determine if the two facial images are those of the same person or not.
This task of facial matching is fraught with difficulties due to the many unpredictable differences which may occur between the previously stored facial image and the live image. For example, these differences may include one or more of the following: mis-registration between the two images, resulting from differences in the height of the face, the tilt of the head, etc.; different lighting conditions, which produce different shadows and greatly affect the contrast distribution of the pattern of the eyes and the face; changes in the individual's appearance due to different hairstyles, make-up, jewelry, facial hair, facial expressions, etc.; different background clutter in the image; and the facial image may be turned to the side, which greatly affects the appearance of facial features.
Because of these and other variations it is very difficult for existing computational systems to recognize when two facial images are from the same person. Some progress has been made in this area by adaptive systems, such as neural networks which have demonstrated an ability to generalize facial features based on training examples despite the above-described kinds of variations. One example of a neural network system of this sort is French Patent No. 2,688,329 issued to B. Anjeniol. Even so, there has still not been satisfactory performance by neural network systems for facial recognition where the variations in the images are as large as those encountered in real-life applications.
For example, in some approaches the system will locate features, such as the eyes, nose, mouth, etc., and then perform facial recognition using the ratios of distances between those features in a neural network. However, since not all features are well-defined and since the features change with different facial orientations and different facial expressions, these ratios also change with the different orientations and expressions. As a result, systems of this sort are not always reliable when confronted with different orientations and expressions.
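The sensitivity of such ratio-based methods to head orientation can be illustrated with a small numerical sketch (the landmark coordinates, function names, and the particular ratio are hypothetical, chosen only for illustration and not drawn from any cited system): projecting the same three-dimensional facial landmarks at two yaw angles changes the two-dimensional distance ratios on which a ratio-based classifier would rely.

```python
import numpy as np

# Hypothetical 3-D facial landmarks (x, y, z) in arbitrary units:
# left eye, right eye, nose tip, mouth center.
landmarks = np.array([
    [-30.0,  20.0,  0.0],   # left eye
    [ 30.0,  20.0,  0.0],   # right eye
    [  0.0,   0.0, 15.0],   # nose tip (protrudes toward the camera)
    [  0.0, -25.0,  5.0],   # mouth center
])

def yaw(points, degrees):
    """Rotate points about the vertical (y) axis and project to 2-D."""
    t = np.radians(degrees)
    rot = np.array([[ np.cos(t), 0.0, np.sin(t)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    rotated = points @ rot.T
    return rotated[:, :2]   # orthographic projection onto the image plane

def eye_mouth_ratio(pts2d):
    """Ratio of inter-ocular distance to eye-midpoint-to-mouth distance."""
    inter_ocular = np.linalg.norm(pts2d[1] - pts2d[0])
    eye_mid = (pts2d[0] + pts2d[1]) / 2.0
    eye_to_mouth = np.linalg.norm(pts2d[3] - eye_mid)
    return inter_ocular / eye_to_mouth

frontal = eye_mouth_ratio(yaw(landmarks, 0.0))
turned = eye_mouth_ratio(yaw(landmarks, 30.0))
print(frontal, turned)   # the ratio shrinks as the head turns to the side
```

Because the measured ratio differs between the frontal and turned views of the same face, a system that treats such ratios as invariant facial signatures can fail when the head orientation varies.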
Another approach utilizes what are called eigenfaces. Eigenfaces are characteristic facial patterns used to discriminate the features of one face from those of another. In this approach, an average face is derived and the differences between a target face and the average face are determined in terms of the eigenfaces. For additional information about this and other techniques, see the article "Face Value", Byte, February 1995, pages 85-89, which is herein incorporated by reference.
However the eigenfaces approach is sensitive to changes in head orientation and lighting conditions because it uses the difference between the target face and the average of all faces as its primary means of comparison.
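A minimal sketch of the eigenfaces computation described above may clarify the approach (random synthetic data stands in for face images, and all variable names are illustrative; this is not the implementation of any cited system): the principal components of a set of training faces are computed, and a target face is encoded by the coefficients of its difference from the average face.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a training set: 20 "face images" of 16x16 pixels,
# each flattened to a 256-element row vector.
faces = rng.random((20, 256))

# 1. Derive the average face and subtract it from every training face.
average_face = faces.mean(axis=0)
centered = faces - average_face

# 2. The eigenfaces are the principal components of the centered set
#    (rows of Vt from the singular value decomposition).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
num_components = 10
eigenfaces = vt[:num_components]                 # shape: (10, 256)

# 3. A target face is represented by projecting its difference from the
#    average face onto the eigenfaces.
target = rng.random(256)
weights = eigenfaces @ (target - average_face)   # shape: (10,)

# 4. The target face is approximately reconstructed from those weights.
reconstruction = average_face + weights @ eigenfaces
```

Because the representation is built entirely from the difference between the target face and the average of the training faces, any change in lighting or head orientation that alters that difference also alters the weights, which is the sensitivity noted above.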
An additional problem with prior face recognition systems has been one of storage capacity and throughput. With regard to storage capacity, where it is desired to recognize a large number of different faces, the volume of information that needs to be stored can be very large. Even with the use of data compression techniques, facial recognition systems which rely on stored information about known faces when making comparisons with the live face can require an impractical amount of storage space for applications where a reasonably large number of faces need to be recognized.
A related problem is the excessive computational time required to perform an analysis of facial images to determine whether a match is present. Many conventional techniques require massive computational capabilities and/or long computational time to perform the required analysis of the images. Throughput is a problem in many applications where access to a system is required very rapidly and long delays for analysis of facial images cannot be tolerated.
Furthermore, even before a match between test and reference facial images can be attempted, an accurate location of the face must be determined. This task is problematic due to the aforementioned variations in the image. Particularly difficult is the task of locating the face amid background clutter and variable amounts of hair on the head.
Thus it would be desirable to provide a system and method for accurately determining the location of a face in an image having background clutter.
It would also be desirable to provide a system and method for accurately performing facial recognition which does not require the storage of a large database and which also does not require excessive computational time.