1. Field of the Invention
The present invention relates to biometric authentication systems, and in particular to authentication systems that are configured to dynamically apply different matching thresholds depending on the circumstances of the authentication process. The invention also relates to biometric authentication systems that are configured to dynamically update the matching thresholds used during the authentication process based on user activity within the system.
2. Description of the Related Art
Systems for authenticating the identity of an individual are now becoming widely deployed. Such systems may be used for a plurality of different reasons, for example, to enhance security at a border crossing, to identify individuals in a citizen ID scheme, to allow physical access to a building, to provide logical access to networks and computer applications, to prove identity during retail transactions, amongst many other possible applications.
Known techniques used within such authentication systems for validating the identity of an individual include the use of passwords, tokens, biometrics, or any combination of these. Within a biometric-based system, biometric samples are initially captured from an individual and enrolled, or stored, within the system for use in later authentications. Examples include fingerprint, iris, or face images, or a recorded sample of a voice.
Features may be extracted from the image to generate biometric templates. These are usually a smaller, more compact representation of the biometric features present in the image. Typically, the templates are used in the day-to-day operation of the system to authenticate individuals, whereas the original biometric data or image is stored or archived.
At a later date or time, a user-supplied biometric sample is then tested against the stored template and, if a match within a desired confidence level is obtained, the user is authenticated. An authentication may be performed using one or more biometric samples from one or more biometric modes. For each individual sample, the verification sample is typically compared against one or more enrollment samples or templates derived from those samples. This comparison is performed by one or more matching algorithms, which typically output a matching score indicating the similarity between the two compared samples. Ideally, this score will be well-distributed across all possible samples in the space.
As such, a biometric authentication system may be considered as constructed of a plurality of different components: a sensor to record the presented biometric, a computer unit to process and eventually save the presented biometric, and an application for which the user's authentication is necessary. The computer unit which processes the presented biometric for authentication or otherwise includes a processing unit comprising a “feature extraction unit”, which extracts the distinguishing features from the raw data coming from the sensor and combines them into the request template; a “matcher”, which compares the request template with the reference template and delivers a “score” value as a result; and a “decision unit”, which takes the score value (or values) as well as the threshold to determine whether the user is authorized or non-authorized.
The matcher incorporates a matching algorithm which will deliver some level of biometric accuracy depending on the acquisition devices, acquisition methods, quality scoring algorithms, environment, and population present, amongst other factors. The “decision unit” compares the score delivered by the matcher with a pre-established threshold; if the score meets the established confidence level the user is authenticated, and if it does not the user is rejected.
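The interaction between the matcher and the decision unit described above can be sketched as follows. The similarity function below is merely an illustrative stand-in for a real matching algorithm, and the template values and threshold are hypothetical:

```python
def match(request_template, reference_template):
    """Matcher: deliver a similarity score for two feature templates.

    A normalized inverse Euclidean distance stands in here for a real
    matching algorithm; higher scores mean greater similarity.
    """
    distance = sum((a - b) ** 2
                   for a, b in zip(request_template, reference_template)) ** 0.5
    return 1.0 / (1.0 + distance)

def decide(score, threshold):
    """Decision unit: authorize only if the score meets the threshold."""
    return score >= threshold

reference = [0.20, 0.50, 0.90]  # enrolled reference template
request = [0.21, 0.48, 0.88]    # template extracted at authentication time
print(decide(match(request, reference), threshold=0.90))  # True
```

Note that the decision unit itself is trivial: the security properties of the system rest almost entirely on how the threshold is chosen, which is the subject of the measures discussed below.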
In order to define an appropriate threshold value for a specific authentication engine one considers a number of measures which determine the effectiveness of a biometric authentication system and examples of these are listed below:
A False Acceptance Rate (FAR) is a measure of the frequency with which a non-authorized person is accepted as authorized, given a number of attempts to perform that authorization. Such authorization of a non-authorized person is an obvious security breach and, as a result, a FAR is generally considered a security-relevant measure. A FAR may be considered a system-level indicator.
A False Rejection Rate (FRR) is similar to the FAR but is the frequency with which an authorized person is denied access. In a verification system, such an authentication failure of an authorized person is considered an inconvenience to that person. This does not constitute a security breach in the same way as a FAR.
A False Match Rate (FMR) is the rate at which non-authorized people are falsely recognized during the matching comparison process. The FMR may therefore be considered an indicator of the effectiveness of the matching algorithm, as opposed to overall system error.
A False Non-Match Rate (FNMR) is the rate at which authorized people are falsely not recognized during feature comparison. Similarly to the FMR, and in contrast to the FRR, previously rejected attempts are not accounted for, and it can therefore be considered an algorithm indicator.
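The four rates above can be estimated empirically from labelled matching scores. In the sketch below the score values and thresholds are illustrative assumptions, and failed-acquisition attempts are ignored, so the FAR approximates the FMR and the FRR approximates the FNMR:

```python
def false_accept_rate(impostor_scores, threshold):
    """Fraction of non-authorized comparisons accepted (FAR/FMR estimate)."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def false_reject_rate(genuine_scores, threshold):
    """Fraction of authorized comparisons rejected (FRR/FNMR estimate)."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Illustrative similarity scores (higher = more similar).
genuine = [0.91, 0.87, 0.95, 0.80, 0.99, 0.76]
impostor = [0.30, 0.55, 0.62, 0.48, 0.71, 0.20]

# A low threshold favours convenience at the expense of security...
print(false_accept_rate(impostor, 0.60))  # 2 of 6 impostors accepted
print(false_reject_rate(genuine, 0.60))   # no genuine users rejected

# ...while a high threshold does the opposite.
print(false_accept_rate(impostor, 0.85))  # no impostors accepted
print(false_reject_rate(genuine, 0.85))   # 2 of 6 genuine users rejected
```

The two threshold settings illustrate the trade-off discussed below: moving the threshold reduces one error rate only by increasing the other.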
The above definitions are described in the context of a verification or positive identification system, where the purpose is to confirm that the authorized person is enrolled in the system. In a negative identification system the purpose is to confirm that a person does not appear in the enrolled system; sample applications include the use of criminal watch-lists or ensuring that a new applicant to a system (e.g. voting) is not already registered in that system. In the case of negative identification, the FRR or FNMR does constitute a security breach because, if the person is incorrectly rejected, that person has avoided detection even though they are already enrolled. Similarly, for negative identification the FAR or FMR constitutes an inconvenience for the genuine person because they have been incorrectly matched against another person already enrolled in the system.
The effectiveness of the authentication system can be measured using either a combination of the FAR with the FRR or of the FMR with the FNMR, and it is important to ensure that their ratios are kept to an appropriate level. For an effective system, it is important that not too many people who should be allowed access are denied it, and that not too many who should not be allowed access are granted it. There is a trade-off between these measurements, and this trade-off defines the threshold for the system.
To determine this ratio, it is known for the suppliers of biometric processing algorithms to generate receiver operating characteristic (ROC) curves. In order to create a ROC curve, a biometric system test usually starts by determining the similarities between different biometric features and a saved reference feature. After many measurements, one obtains a histogram or distribution for authorized users and another for unauthorized users, showing the frequency of matches per similarity rating. In an ideal case, the two distributions should overlap as little as possible. Through integration of these distributions, FAR/FMR and/or FRR/FNMR graphs are determined, which are dependent on the data from which they were generated. Comparing different biometric systems is problematic in that the algorithmic values for “similarity” or, inversely, “distance” are defined very differently, and therefore threshold values often have incomparable meanings. This difficulty is avoided by a ROC curve, in which the similarity threshold parameter is eliminated and the FRR or FNMR is seen as a function of the FAR or FMR respectively. By plotting either the FRR as a function of the FAR, or the FNMR as a function of the FMR, it is possible to visualize the performance of the system and to choose from the values generated a threshold value which will give an appropriate level of confidence in the security achievable with the system. Many suppliers of biometric algorithms provide to the vendors of authentication systems one such threshold value from the generated ROC curve for their specific algorithm, and this can be considered a fixed static score threshold value. It will be appreciated that a ROC curve is just a graphical representation of a threshold table.
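The ROC construction just described amounts to sweeping a threshold over the genuine and impostor score distributions and recording the resulting error-rate pairs. The following sketch, using illustrative data and hypothetical function names, shows how a fixed static threshold might be read off such a table to meet a target false match rate:

```python
def roc_points(genuine_scores, impostor_scores):
    """Sweep candidate thresholds and return (threshold, FMR, FNMR)
    triples -- effectively the threshold table that a ROC curve depicts."""
    points = []
    for t in sorted(set(genuine_scores) | set(impostor_scores)):
        fmr = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        fnmr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        points.append((t, fmr, fnmr))
    return points

def pick_threshold(points, max_fmr):
    """Return the lowest threshold whose FMR does not exceed the target,
    minimizing the FNMR subject to that security requirement."""
    for t, fmr, fnmr in points:  # thresholds ascend, so FMR is non-increasing
        if fmr <= max_fmr:
            return t
    return None

genuine = [0.90, 0.80, 0.95]   # illustrative authorized-user scores
impostor = [0.30, 0.50]        # illustrative non-authorized scores
print(pick_threshold(roc_points(genuine, impostor), max_fmr=0.0))  # 0.8
```

Whatever threshold is selected in this way is then fixed at deployment time, which is precisely the limitation the static-threshold approach described next suffers from.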
Existing deployments typically just select this one supplied fixed static score threshold and use it as the matching decision threshold. For example, scores at or above the threshold are accepted as successful matches and scores below are rejected, or vice versa. The problem with such a static threshold is that no account is taken of the specifics of the enrollment process, the specifics of the authentication process, or other parameters.
It will be further understood that the definition or construction of a ROC curve is dependent on the usage pattern of the parameters that define that specific curve. As the ROC curve is a statistical representation of the results obtained using a specific set of data parameters, it will be understood that the greater the statistical population used to define the curve, the better the representation it will be of the overall efficiency of the system. Furthermore, it is possible that continued use of authentication systems using a specific ROC curve may provide an indication that the parameters making up that curve are not optimized and require tuning or some other form of modification. However, heretofore, once a biometric authentication system has been deployed it stands alone, and the threshold values used to authenticate the user are based on the parameters that were available at the time of deployment of the system. These parameters can, over time, become superseded or redundant, but this is not reflected in the authentication process. As a result, the security levels and/or convenience levels achievable with an out-of-date authentication system are lower than is possible with an up-to-date system.
There is therefore a need to provide an improved authentication system and process which can take these variances into account when performing the authentication. There is another need to provide an improved authentication system that can provide a dynamic update of the threshold parameters that are used for the authentication process based on usage data of the system.