Various researchers have shown that ear recognition is a viable alternative to more common biometrics, such as fingerprint, face and iris. The ear is stable over time, less invasive to capture, and does not require as much control during image acquisition as other biometrics. It is reasonable to assert that there are fewer privacy issues for the ear than there are for the face.
Traditionally, ear recognition research has been performed on ear images that were captured in an ideal setting. In an ideal setting, the ears are all captured in the same position, under identical lighting, and at identical resolution. With advances in computer vision and pattern recognition techniques, research on ear recognition is shifting to a more challenging scenario in which ear images are acquired from real-world settings, commonly referred to as "unconstrained ears" or "ears in the wild".
FIG. 1 illustrates the difficulty of recognizing individuals using ears in the wild: in this challenging task for ear recognition in an unconstrained setting, one is given five images of four different subjects and asked to determine which pair of images belongs to the same person. This specific example primarily illustrates the problem of pose variation, but many other factors may affect recognition performance, such as different acquisition devices, low resolution, illumination variations, and occlusions caused by hair, head accessories, earrings, headsets, and so on. To overcome these challenges, an ear recognition system must achieve good results for non-cooperative subjects. This would make ear biometric recognition very useful for practical purposes, such as video surveillance and continuous authentication.
Accordingly, what is needed in the art is an improved method for ear biometric recognition. However, in view of the art considered as a whole at the time the present invention was made, it was not obvious to those of ordinary skill in the field of this invention how the shortcomings of the prior art could be overcome.