This invention relates generally to systems and methods wherein imagery is acquired primarily to determine or verify the identity of an individual person using biometric recognition.
Biometric recognition methods are widespread and are of great interest in fields such as security, protection, and financial transaction verification, and in venues such as airports and office buildings, but prior to the invention their ability to correctly identify individuals, even when searching through a small reference database of faces, has been limited. There are typically false positives (meaning that the incorrect person was identified) or false negatives (meaning that the correct person was not identified).
There are several reasons for such poor performance of biometric recognition methods.
First, when comparing two faces (from the same person or from different persons), it is important that the biometric templates or features are registered so that corresponding features (nose position for example) can be compared accurately. Even small errors in registration can result in matching errors even if the faces being compared are from the same person.
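The effect of misregistration can be illustrated with a small sketch (not part of the patent; the profile and correlation measure below are purely hypothetical stand-ins for real biometric features):

```python
# Illustrative sketch: how a small registration offset between two images
# of the SAME face lowers a similarity score. All names and values here
# are hypothetical.

def normalized_correlation(a, b):
    """Pearson correlation of two equal-length sample lists."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    den_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return num / (den_a * den_b)

# A synthetic 1-D intensity profile standing in for a row of face pixels.
profile = [((i % 7) - 3) ** 2 for i in range(50)]

aligned = normalized_correlation(profile, profile)           # perfect registration
shifted = normalized_correlation(profile[2:], profile[:-2])  # 2-pixel misregistration

# "aligned" is 1.0, while "shifted" drops sharply, so images of the same
# face can fail to match when registration is off by even a few pixels.
```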
Second, for facial or iris recognition, it is important that the recognized face or iris and the reference face or iris have the same, or very similar, pose. Pose in this context means orientation (pan, tilt, yaw) and zoom with respect to the camera. Variations in pose between the images again result in matching errors even if the faces being compared are from the same person.
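One way to express this constraint is a pose-compatibility gate applied before any match is attempted; the function and tolerance values below are hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch: only attempt a match when two poses are close enough.
# A pose is modeled as (pan, tilt, yaw, zoom); thresholds are illustrative.

def pose_compatible(pose_a, pose_b, max_angle_deg=10.0, max_zoom_ratio=1.2):
    """Return True if two (pan, tilt, yaw, zoom) poses are close enough to compare."""
    # Compare the three orientation angles in degrees.
    for a, b in zip(pose_a[:3], pose_b[:3]):
        if abs(a - b) > max_angle_deg:
            return False
    # Compare zoom as a ratio, so the check is symmetric.
    zoom_a, zoom_b = pose_a[3], pose_b[3]
    return max(zoom_a, zoom_b) / min(zoom_a, zoom_b) <= max_zoom_ratio

probe = (2.0, -1.0, 0.0, 1.0)          # near-frontal capture
reference = (3.0, 0.0, 1.0, 1.1)       # similar pose: comparable
profile_view = (45.0, 0.0, 0.0, 1.0)   # large pan difference: not comparable
```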
Third, the dynamic range or sensitivity of the sensor may not be sufficient to capture biometric information related to the face. For example, some biometric systems are multi-modal, meaning that they use several biometrics (for example, iris and face) to improve the accuracy of recognition. In such multiple biometric systems and methods there are problems in assuring that each of the sets of data is from the same person; for example, the system may unintentionally capture the face of a first individual and the iris of a second individual, resulting in an identification or match failure. Another problem with such multiple biometric systems is the difficulty of obtaining good data for each of the separate biometrics, e.g., face and iris, because the albedo or reflectance of one biometric material (the iris, for example) may be very different from the albedo of a second biometric (the face, for example). The result is that the signal reflected off one of the two biometrics falls outside the dynamic range or sensitivity of the camera and is either saturated or in the dark current region of the camera's sensitivity, or simply appears as a uniform gray scale with very poor contrast of biometric features, while the second biometric signal is within the dynamic range or sensitivity of the camera and has a sufficient signal-to-noise ratio to enable accurate biometric or manual recognition.
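A minimal sketch of this dynamic-range problem follows; the thresholds, region data, and helper function are hypothetical, chosen only to show how an exposure suited to one biometric can leave the other unusable:

```python
# Illustrative sketch: with fixed camera settings, one biometric region can
# land outside the sensor's usable range while the other is fine, because
# face and iris reflectance (albedo) differ. Thresholds are hypothetical
# 8-bit values.

DARK_FLOOR = 10    # below this, signal is lost in dark current / noise
SATURATION = 245   # above this, the 8-bit sensor clips

def region_usable(pixels):
    """True if most pixels in a region fall inside the sensor's usable range."""
    usable = sum(DARK_FLOOR < p < SATURATION for p in pixels)
    return usable / len(pixels) > 0.9

# Exposure chosen for the brighter face leaves the darker iris underexposed.
face_region = [120, 130, 140, 150, 135, 128, 142, 138, 131, 144]
iris_region = [3, 5, 8, 4, 6, 7, 2, 9, 5, 4]
```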
Fourth, the illumination may vary between the images being matched in the face recognition system. Changes in illumination can result in poor match results since detected differences are due to the illumination changes and not to the fact that a different person is being matched.
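One common mitigation for global illumination change, sketched below purely for illustration (it is not a method claimed in the patent), is to normalize each region to zero mean and unit variance before comparison:

```python
# Hypothetical sketch: zero-mean, unit-variance normalization, one common
# way to reduce the effect of a global illumination change before matching.

def normalize_illumination(pixels):
    """Rescale a pixel list to zero mean and unit variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0  # guard against a perfectly flat region
    return [(p - mean) / std for p in pixels]

dim = [10, 20, 30, 40]
bright = [110, 120, 130, 140]  # same structure, brighter illumination

# After normalization both versions are identical, so a matcher compares
# structure rather than absolute brightness.
```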
Since the reflectance of a face is different from that of an iris, acquiring an image of an iris and a face from the same person with a single sensor according to prior methods and systems has yielded poor results. Past practice required two cameras or sensors or, in the case of a single sensor, operated the sensor and illuminators at constant settings.
For example, Adam, et al., U.S. Pat. Publ. 20060050933 aims to address the problem of acquiring data for use in face and iris recognition using one sensor, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.
Determan, et al., U.S. Pat. Publ. 20080075334 and Saitoh, et al., U.S. Pat. Publ. 20050270386 disclose acquiring face and iris imagery for recognition using a separate sensor for the face and a separate sensor for the iris. Saitoh claims a method for performing iris recognition that includes identifying the position of the iris using a face and iris image, but uses two separate sensors that focus separately on the face and iris respectively and acquire data simultaneously, such that user motion is not a concern.
Determan also discusses using one sensor for both the face and iris, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.
Jacobson, et al., in U.S. Pat. Publ. 20070206840 also describes a system that includes acquiring imagery of the face and iris, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.