When an image of a person is captured with a flash, the person's pupils may appear red or gold in the captured image. This is called the red-eye or gold-eye effect. The red-eye or gold-eye effect is undesirable for the person whose image is captured. Therefore, a variety of image processing methods have been proposed to correct the red-eye or gold-eye effect so that the pupils show their natural color in the captured image. Several exemplary methods are described in the following. In one method, a user is asked to designate a to-be-processed region that includes the eyes with an incorrect color. Based on the color value of the designated to-be-processed region, obtained with reference to the hue, saturation and lightness of the region, a red-eye correcting process is performed on the pupils of the eyes (see Patent Document 1, for example). Alternatively, image-capturing information (including information about the flash used, the exposure value (Ev value), the shutter speed, and the aperture value) is added to a captured image, and the red-eye correcting process is performed only on images which are judged to possibly have the red-eye effect (see Patent Document 4, for example).
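The hue/saturation/lightness-based correction described above can be sketched as follows. This is a minimal illustration only; the pixel predicate, the thresholds, and the desaturation strategy are assumptions for the sketch and are not taken from any of the cited patent documents.

```python
import colorsys

# Illustrative thresholds (assumptions, not from the cited documents).
RED_HUE_LOW = 0.95   # hue wraps around 1.0; red sits near 0.0 / 1.0
RED_HUE_HIGH = 0.05
MIN_SATURATION = 0.4
MIN_VALUE = 0.2

def is_red_eye_pixel(r, g, b):
    """Judge a pixel (r, g, b in 0..1) as part of a red pupil
    based on its hue, saturation and lightness (value)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    reddish = h >= RED_HUE_LOW or h <= RED_HUE_HIGH
    return reddish and s >= MIN_SATURATION and v >= MIN_VALUE

def correct_pixel(r, g, b):
    """Replace a red-eye pixel with a desaturated, darkened tone so the
    pupil shows a more natural color; leave other pixels unchanged."""
    if not is_red_eye_pixel(r, g, b):
        return (r, g, b)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Keep the brightness structure but remove the red cast.
    return colorsys.hsv_to_rgb(h, 0.0, v * 0.4)

def correct_region(pixels):
    """Apply the correction to every pixel of a user-designated region."""
    return [correct_pixel(*p) for p in pixels]
```

In this sketch the user-designated to-be-processed region is simply a list of RGB tuples; a real implementation would operate on a rectangle of image pixels and typically smooth the boundary of the corrected area.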
Many other methods have been proposed. In one method, a candidate region that may include a person's face is extracted from an image. The extracted candidate region is then divided into a plurality of smaller regions and compared with a face region pattern in which the characteristic values of the plurality of smaller regions are set in advance, so that a face region is extracted from the candidate region (see Patent Document 2, for example). Alternatively, a region showing a person's face is extracted from an image. When the color density of the extracted face region corresponds to a predetermined threshold value, a candidate region which may include the person's trunk is extracted. The extracted face region is then evaluated in terms of accuracy (the likelihood that the extracted region shows the person's face) based on the color densities and saturations of the face and trunk regions, so that the face region is accurately extracted (see Patent Document 3, for example). As a further alternative, a plurality of candidate regions which may show a person's face are extracted from an image. The accurate face region is then extracted in such a manner that each of the extracted candidate face regions is evaluated in terms of accuracy based on the degree of overlapping (see Patent Document 5, for example).
    [Patent Document 1] Unexamined Japanese Patent Application Publication No. 2000-76427
    [Patent Document 2] Unexamined Japanese Patent Application Publication No. 2000-137788
    [Patent Document 3] Unexamined Japanese Patent Application Publication No. 2000-148980
    [Patent Document 4] Unexamined Japanese Patent Application Publication No. 2004-145287
    [Patent Document 5] Unexamined Japanese Patent Application Publication No. 2000-149018
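The overlap-based evaluation in the last example above can be sketched as follows. The rectangle representation and the use of intersection-over-union as the overlap measure are assumptions made for this sketch; the cited documents do not specify these details.

```python
# Candidate regions are axis-aligned rectangles (x1, y1, x2, y2)
# (an illustrative assumption, not from the cited documents).

def overlap_ratio(a, b):
    """Intersection-over-union of two rectangles."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def score_candidates(candidates):
    """Score each candidate by how strongly it overlaps the others;
    a region confirmed by many overlapping detections ranks higher."""
    return [
        (c, sum(overlap_ratio(c, o) for o in candidates if o is not c))
        for c in candidates
    ]

def best_face_region(candidates):
    """Select the candidate with the highest mutual-overlap score."""
    return max(score_candidates(candidates), key=lambda t: t[1])[0]
```

For instance, given two heavily overlapping candidates and one isolated one, the isolated candidate receives a score of zero and one of the overlapping pair is selected.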