Numerous digital image processing procedures, such as identification of persons in digital photographs and redeye correction procedures, find human eyes in digital images. In many of these procedures, the located position of human eyes is approximate. This is suitable for some purposes, but deleterious for others.
Redeye correction can be improved by accurate determination of eye locations. The term “redeye” refers to red light from flash illumination that is reflected off the eye retina of a human subject and back out through the pupil to the camera. Redeye is commonly a problem in cameras that have a flash unit located close to the taking lens. In a digital image, a “redeye defect” is a cluster of one or more pixels that exhibit the red coloration characteristic of redeye. A “redeye defect pair” is a pair of clusters within a digital image that can be classified, based upon such characteristics as relative sizes and locations, as representing light from a right eye and a left eye of the same person.
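The red coloration characteristic of redeye can be tested per pixel before clustering. The following sketch is purely illustrative; the redness measure, threshold, and brightness floor are assumptions introduced here, not taken from any cited method:

```python
def is_redeye_colored(r, g, b, threshold=0.5):
    """Return True if an RGB pixel (0-255 channels) shows the strong red
    dominance characteristic of redeye.  The redness measure and the
    threshold values are illustrative assumptions."""
    total = r + g + b
    if total == 0:
        return False
    # Redness: fraction of total intensity carried by the red channel.
    redness = r / total
    # Require red dominance plus a minimum brightness, since dark pixels
    # with slight red bias are usually not flash reflections.
    return redness > threshold and r > 80

# A bright red-dominant pixel versus a nearly neutral one:
print(is_redeye_colored(200, 60, 60))    # red-dominant and bright
print(is_redeye_colored(120, 110, 115))  # nearly neutral gray
```

A cluster of pixels passing such a test would form a candidate redeye defect; pairing two clusters by relative size and position yields a candidate redeye defect pair as described above.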
Many algorithms have been proposed to correct redeye, with the goal of generating an improved image, in which the pupils appear natural. In those algorithms, image pixels that need to undergo color modification are determined along with an appropriate color or colors for the modified pixels.
In some redeye correction procedures, redeye is detected manually. An operator moves a cursor to manually indicate to a computer program the redeye portion of a digital image. This approach is effective, but labor-intensive and slow.
Automated detection of redeye pixels can be faster, but it is often the case that the boundary of a redeye defect is not well defined. This is also a problem in a semi-automated approach, in which a user indicates an eye location by setting a single point. When determining redeye defect pixels, it is easy for an algorithm to mistakenly miss pixels that should be considered redeye and/or include pixels that are not really redeye. When coupled with defect correction, these misclassifications can produce objectionable artifacts. An under-correction occurs when some redeye pixels are correctly identified and color corrected, but others are not. As a result, a portion of the human subject's pupil can still appear objectionably red. An over-correction occurs when non-redeye pixels are mistakenly considered redeye, and the color modification is applied. As a result, a non-pupil portion of the human face, such as the eyelid, can be modified by the color correction normally applied to redeye pixels, resulting in a very objectionable artifact.
In correcting redeye pixels, modified pixels are often blended with neighboring pixels of the original image to reduce unnatural harsh edges. For example, in U.S. Published Patent Application Ser. No. 2003/0007687A1 a blending filter is used. If such a blending filter is of uniform size for all human images, then some (typically relatively small) human faces having redeye defects may appear over-smoothed, with objectionable blurriness. Other, relatively large human faces may retain harsh edges. A solution to this problem, disclosed in U.S. Published Patent Application Ser. No. 2003/0007687A1, is operator control of the level of blending. This may be effective, but it is another labor-intensive and slow procedure.
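The scale dependence described above can be illustrated by making the width of the blend transition proportional to the detected pupil size rather than uniform. This is a minimal sketch under assumptions of our own (the linear ramp, the `scale` constant, and all names are invented here; this is not the filter of the cited application):

```python
def blend_weight(distance, pupil_radius, scale=0.25):
    """Weight for mixing a color-corrected pixel value with the original.

    Inside the pupil the corrected value dominates (weight 1.0); across a
    transition band whose width grows with pupil_radius, the weight falls
    linearly to zero.  The linear ramp and 'scale' are illustrative
    assumptions.
    """
    band = max(1.0, scale * pupil_radius)  # blend band scales with eye size
    if distance <= pupil_radius:
        return 1.0                         # fully corrected
    if distance >= pupil_radius + band:
        return 0.0                         # fully original
    return 1.0 - (distance - pupil_radius) / band

def blend(corrected, original, distance, pupil_radius):
    """Mix two RGB tuples according to the distance-based weight."""
    w = blend_weight(distance, pupil_radius)
    return tuple(w * c + (1.0 - w) * o for c, o in zip(corrected, original))
```

With a small pupil the transition band stays narrow, avoiding over-smoothing; with a large pupil it widens, avoiding harsh edges, without requiring operator control of the blending level.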
In Yuille et al., “Feature Extraction from Faces Using Deformable Templates,” Int. Journal of Comp. Vis., Vol. 8, Iss. 2, 1992, pp. 99-111, the authors describe a method of using energy minimization with template matching for locating the eye and iris/sclera boundary.
In Kawaguchi et al., “Detection of the Eyes from Human Faces by Hough Transform and Separability Filter”, ICIP 2000 Proceedings, pp. 49-52, the authors describe a method of detecting the iris/sclera boundary in images containing a single close-up of a human face.
U.S. Pat. No. 6,252,976 and U.S. Pat. No. 6,292,574 disclose methods for detecting red eye defects, in which skin colored regions of a digital image are searched for pixels with color characteristics of red eye defects to determine eye locations.
U.S. Pat. No. 6,134,339 discloses a method, in which pixel coordinates of red eye defects are analyzed for spacing and spatial structure to determine plausible eye locations. Template matching is used.
The above approaches tend to have limited applicability or place large demands on processing resources.
Algorithms for finding shapes are known. The Hough transform method is described in U.S. Pat. No. 3,069,654. Kimme et al., “Finding Circles by an Array of Accumulators,” Communications of the ACM, Vol. 18, No. 2, February, 1975, pp. 120-122, describes an efficient method for determining circles from an array of edge magnitudes and orientations. A RANSAC fitting routine is described in Hartley and Zisserman, Multiple View Geometry, 2000, pp. 101-107.
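The accumulator idea behind the circle-finding methods cited above can be sketched as follows. This is a minimal illustration for a known radius; all names are invented here, and it is not code from the cited sources. It follows the Kimme-style refinement of having each edge point vote only along its gradient direction rather than over a whole circle of candidate centers:

```python
import math
from collections import Counter

def circle_center_votes(edges, radius):
    """Accumulate votes for circle centers of a known radius.

    edges: iterable of (x, y, theta) tuples, where theta is the edge
    (gradient) orientation in radians.  Each edge point votes at the two
    positions 'radius' away along its gradient direction, which is far
    cheaper than voting over every point of a candidate circle.
    """
    acc = Counter()
    for x, y, theta in edges:
        for sign in (+1, -1):  # the center may lie either way along the gradient
            cx = round(x + sign * radius * math.cos(theta))
            cy = round(y + sign * radius * math.sin(theta))
            acc[(cx, cy)] += 1
    return acc

# Synthetic check: edge points on a circle of radius 5 centered at (10, 10),
# each with a radially outward gradient orientation.
edges = []
for k in range(12):
    t = 2 * math.pi * k / 12
    edges.append((10 + 5 * math.cos(t), 10 + 5 * math.sin(t), t))

votes = circle_center_votes(edges, 5)
print(votes.most_common(1)[0][0])  # the true center collects the most votes
```

An iris boundary detector could scan a small range of plausible radii and keep the radius and center with the highest vote count, which is one way such an accumulator keeps the demands on processing resources moderate.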
It would thus be desirable to provide eye detection methods and systems, in which iris boundaries can be detected with relatively good efficiency and moderate computing resources. It would further be desirable to provide eye detection methods and systems, in which redeye defects can be used, but are not mandatory.