Today, through the use of digital photography, consumers can easily capture and store large collections of personal digital images. These image collections can be stored locally on a personal computing device, or they can be stored on an online photo management service such as Kodak Gallery, which maintains digital image collections for a large number of users and allows users to share their digital images with other users via the internet.
As these image collections grow large, however, it can be challenging for a user to search for and retrieve specific desired images from the collection. For example, a user may wish to retrieve digital images containing a specific person. In response to this need, early digital photo management products provided users with tools for “tagging” individual images by entering a text description identifying the individuals shown in each image and saving this information as searchable metadata, either in the image file itself or in an associated database for the image files. The user could then easily retrieve all images containing one or more of the desired metadata tags. In these early implementations, however, the user was required to browse through each image to manually identify which images contained people, and to then enter the identity of each individual so that it could be saved as metadata in the digital image file. A significant disadvantage of this approach is that manually identifying all the individuals appearing in a large digital image collection can be very time consuming.
Squilla et al., in U.S. Pat. No. 6,810,149, teach an improved method wherein image icons showing, for example, the faces of various individuals known to the user are created by the user and subsequently used to tag images in the user's digital image collection. This visually oriented association method improves the efficiency of the identification process.
More recent digital photo management products have added face detection algorithms that automatically detect faces in each digital image of a digital image collection. The detected faces are presented to the user so that the user can input the identity of each detected face. For example, the user can input the identity of a detected face by typing the individual's name or by clicking on a predefined image icon associated with the individual.
Even more advanced digital photo management products have added facial recognition algorithms to assist in identifying individuals appearing in a collection of digital images. An example of such a photo management product is shown in U.S. Patent Application Publication 2009/0252383. Such facial recognition algorithms can be used to compare the detected faces to faces that have been previously identified. However, facial recognition algorithms can still produce erroneous results, being confused by people who have a similar appearance. Current facial recognition algorithms typically assign a probability of a match between a target face and faces that have been previously identified, based on one or more features of the target face, such as eye spacing, mouth distance, nose distance, cheek bone dimensions, hair color, skin tone, and so on.
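To illustrate the kind of feature-based matching described above, the following is a minimal sketch in which each face is represented by a handful of normalized feature measurements and a match score is derived from feature distance. The feature names, values, and threshold here are hypothetical, and the sketch does not reproduce the method of any cited patent.

```python
import math

# Hypothetical facial feature measurements, normalized to [0, 1];
# a real system would derive these from a face detection and
# landmark-localization step.
KNOWN_FACES = {
    "alice": {"eye_spacing": 0.42, "mouth_width": 0.31, "skin_tone": 0.55},
    "bob":   {"eye_spacing": 0.47, "mouth_width": 0.29, "skin_tone": 0.60},
}

def match_probability(target, reference):
    """Map Euclidean feature distance to a crude similarity score in (0, 1]."""
    dist = math.sqrt(sum((target[k] - reference[k]) ** 2 for k in target))
    return 1.0 / (1.0 + dist)

def best_match(target, known=KNOWN_FACES, threshold=0.8):
    """Return (name, score) for the most similar known face, or None
    if no known face clears the (arbitrary) acceptance threshold."""
    name, score = max(
        ((n, match_probability(target, f)) for n, f in known.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else None
```

This toy model also exhibits the failure mode noted above: for two similar-looking individuals (such as identical twins), the reference feature vectors lie close together, so the top-scoring match is no longer a reliable discriminator.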
Examples of facial recognition techniques can be found in U.S. Pat. Nos. 4,975,969 and 7,599,527. When facial recognition is performed against individuals with similar appearances, facial recognition algorithms can often return the incorrect individual. For example, current facial recognition algorithms may have difficulty distinguishing between two individuals who are identical twins. With a large digital image collection, a large number of different individuals can be identified, thereby increasing the opportunity for the facial recognition algorithm to return an incorrect result.
In the article “Efficient Propagation for face annotation in family albums” (Proceedings of the 12th ACM International Conference on Multimedia. pp. 716-723, 2004), Zhang et al. teach a method for annotating photographs where a user selects groups of photographs and assigns names to the photographs. The system then propagates the names from a photograph level to a face level by inferring a correspondence between the names and faces. This work is related to that described in U.S. Pat. No. 7,274,822.
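The photograph-level-to-face-level inference described by Zhang et al. can be loosely illustrated with a simple constraint-propagation toy. Their actual method is probabilistic; the data layout and function below are hypothetical and show only the elimination-style core of the idea.

```python
def propagate_names(photos):
    """photos: list of (name_set, cluster_ids) pairs, where each cluster id
    stands for one (already grouped) person's face appearing across photos.
    Returns {cluster_id: name} for clusters that simple elimination can pin
    down; this is a toy, not Zhang et al.'s probabilistic formulation."""
    # Candidate names for a cluster: the intersection of the name sets of
    # every photo in which that cluster appears.
    candidates = {}
    for names, clusters in photos:
        for c in clusters:
            candidates[c] = candidates.get(c, set(names)) & set(names)

    assigned = {}
    changed = True
    while changed:
        changed = False
        for c, cands in candidates.items():
            if c not in assigned and len(cands) == 1:
                assigned[c] = next(iter(cands))
                changed = True
                # A resolved name cannot also belong to a different face
                # in the same photo, so remove it from co-occurring clusters.
                for names, clusters in photos:
                    if c in clusters:
                        for other in clusters:
                            if other != c:
                                candidates[other].discard(assigned[c])
    return assigned
```

For example, if one photo is tagged {Ann, Ben} and contains two face clusters, and a second photo tagged only {Ann} contains the first of those clusters, the intersection pins the first cluster to Ann, and elimination then assigns Ben to the second.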
In the article “Toward context aware face recognition” (Proceedings of the 13th ACM International Conference on Multimedia, pp. 483-486, 2005), Davis et al. disclose a method for improving face recognition accuracy by incorporating contextual metadata.
U.S. Patent Application Publication 2007/0098303 to Gallagher et al., entitled “Determining a particular person from a collection,” discloses using features such as person co-occurrence to identify persons in digital photographs.
U.S. Patent Application Publication 2007/0239683 to Gallagher, entitled “Identifying unique objects in multiple image collections,” teaches a method for determining whether two persons identified in separate image collections are the same person using information such as user-provided annotations and connections between the collections.
U.S. Patent Application Publication 2007/0239778 to Gallagher, entitled “Forming connections between image collections,” describes a method for establishing connections between image collections by determining similarity scores between the image collections.
In the article “Autotagging Facebook: social network context improves photo annotation” (First IEEE Workshop on Internet Vision, 2008), Stone et al. teach using social network context to improve face recognition by using a conditional random field model.
U.S. Patent Application Publication 2009/0192967 to Luo et al., entitled “Discovering social relationships from personal photo collections,” discloses a method for determining social relationships between people by analyzing a collection of images.
U.S. Patent Application Publication 2009/0046933 to Gallagher, entitled “Using photographer identity to classify images,” teaches a method for identifying persons in a photograph based on the identity of the photographer.
There is a need for an improved process for assisting a user in classifying individuals in a digital image, particularly when the digital image is part of a large collection of digital images. Furthermore, comparing faces against a large database of images can be time consuming as the number of known faces increases. It is desirable that this recognition execution time be reduced.