Pattern matching processing determines whether or not a plurality of patterns targeted for the processing are identical. A pattern is, for example, a face image, a fingerprint image, or a speech-signal waveform. Pattern matching processing is known as a particularly important technology in the field of biometric authentication.
An example of the pattern matching processing will be briefly described below. First, a matching score is calculated; the matching score represents how similar two patterns targeted for the pattern matching processing are. In one method, the matching score is calculated by using a feature vector extracted from each pattern targeted for the pattern matching processing and a preliminarily prepared group of feature vectors called a discrimination dictionary. In most cases, the discrimination dictionary is generated by machine learning using a large number of pattern examples.
After the matching score has been calculated in this way, it is compared with a threshold value, and a matching result (that is, whether or not the two patterns targeted for the pattern matching processing are identical) is determined. In this way, the pattern matching processing is carried out.
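The score-and-threshold flow described above can be sketched as follows. This is only an illustrative example, not the method of any cited literature: cosine similarity is assumed as the score, and the "discrimination dictionary" is represented as a plain list of feature vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (assumed score)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_score(query, dictionary):
    """Matching score: highest similarity between the query feature
    vector and any vector in the discrimination dictionary."""
    return max(cosine(query, d) for d in dictionary)

def is_identical(query, dictionary, threshold=0.8):
    """Compare the matching score with a threshold value to obtain
    the matching result (identical pattern or not)."""
    return matching_score(query, dictionary) >= threshold
```

In practice the dictionary, the score function, and the threshold would all come from machine learning on a large number of pattern examples, as noted above.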
One problem with the pattern matching processing is caused by quality degradation of a pattern (in other words, unsharpness or indistinctness of a pattern). When the degree of quality degradation is large, information necessary for the pattern matching processing (for example, information representing features of a face or a fingerprint) may be lost. For example, when facial images of persons targeted for the matching processing are blurry and indistinct, the facial image of any one person comes to resemble the facial images of the other persons. For this reason, in face matching processing based on blurry facial images, even though the two facial images targeted for the pattern matching processing are not images of an identical person, an erroneous determination that they are images of an identical person is likely to be made.
In this regard, methods have been proposed that estimate the qualities (degradation degrees) of patterns and perform the pattern matching processing on the patterns in view of information related to the estimated qualities. PTL 1 (Japanese Patent Application Laid-Open Publication No. 2010-129045) discloses a technology which, in fingerprint matching processing, detects (determines) blur in a fingerprint image as information for estimating the quality of the pattern. PTL 2 (Japanese Patent Application Laid-Open Publication No. 2011-002494) discloses a technology which, in speech recognition processing, estimates a sound quality level as information for estimating the quality of the pattern.
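One simple way to take an estimated quality into account, sketched below, is to require a stricter threshold when the pattern is degraded, which suppresses the false matches on blurry patterns described above. This is a hypothetical illustration, not the method of PTL 1 or PTL 2; the quality scale and the adjustment rule are assumptions.

```python
def quality_adjusted_decision(score, quality, base_threshold=0.8):
    """Hypothetical sketch of quality-aware matching.

    quality is assumed to lie in [0, 1], where 1.0 means a sharp
    pattern and 0.0 means a fully degraded one. The lower the
    estimated quality, the higher the threshold the matching score
    must exceed before the two patterns are declared identical.
    """
    # Raise the threshold toward 1.0 in proportion to the degradation.
    threshold = base_threshold + (1.0 - quality) * (1.0 - base_threshold) * 0.5
    return score >= threshold
```

With this rule, a score of 0.85 is accepted for a sharp pattern (threshold 0.8) but rejected for a fully degraded one (threshold 0.9), reflecting the idea that degraded patterns warrant more caution.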
Further, PTL 3 (Japanese Patent Application Laid-Open Publication No. 2007-140823) discloses a technology which performs the pattern matching processing in view of the conditions under which the pattern is photographed (for example, the lighting environment, the direction of a face, and the presence or absence of a worn object such as sunglasses).
Moreover, PTL 4 (Japanese Patent Application Laid-Open Publication No. 2006-072553) discloses a technology which, in biometric authentication, corrects an indistinct input image before performing the pattern matching processing. Furthermore, PTL 5 (Japanese Patent Application Laid-Open Publication No. 2008-107408) discloses a technology which performs speech recognition processing in view of the ambient environment.