While there has been significant work in face detection (see, for example, Nguyen, D., Halupka, D., Aarabi, P., Sheikholeslami, A., “Real-time Face Localization Using Field Programmable Gate Arrays”, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 36, No. 4, pp. 902-912, August 2006), there seems to have been little work in the area of face modification, hair restyling and transforming, and “facelifting” for digital images.
Specifically, U.S. Pat. No. 6,293,284 to Rigg describes a method and apparatus utilizing manual user interaction in order to recolor the facial features and to simulate the effects of cosmetic products. Unfortunately, this approach does not utilize advanced image processing, computer vision or machine learning methodologies and does not simulate plastic surgery procedures such as facelifts. As such, a user has to spend significant time and effort in order to manually enter the parameters for the facial recoloring.
Virtual plastic surgery is the focus of U.S. Pat. Nos. 5,854,850 and 5,825,941 to Linford et al. and U.S. Pat. No. 5,687,259 to Linford. However, the system disclosed in these references is relatively complicated and is intended to be an in-clinic system used by professional or experienced operators. Further, the system is not provided on the Internet or through mobile and wireless devices, and does not address utilization of advanced image processing, computer vision or machine learning methodologies for estimating the plastic surgery parameters. As a result, operators are required to manually adjust the system parameters in order to display the results of plastic surgery in a virtual fashion. This system is mostly manual, and does not utilize face localization, feature detection, facelifts, or feature/face recoloring on an automatic or semi-automatic basis.
The method disclosed in U.S. Pat. No. 6,502,583 to Utsugi utilizes image processing in order to simulate the effects of makeup on a target face. This system, however, does not utilize automatic or semi-automatic face detection, feature detection, or parameter estimation and as a result requires manual user input for estimating the necessary parameters. Furthermore, this system was not intended for general virtual face modifications, and performs neither virtual plastic surgery nor hair restyling/transformation.
The method and system of U.S. Pat. No. 6,453,052 to Kurokawa et al. utilizes pre-stored hair styles to restyle a user image. In other words, it is a unidirectional hair replacement system that does not provide the ability to extract a hair style from one image and place that style in another image. As well, being only a unidirectional hair replacement system, it is not capable of face readjustment, replacement, or modification. Finally, this system requires hair styles with basic information to be pre-stored, and does not disclose an automatic method for extracting such information.
U.S. Pat. No. 6,937,755 to Orpaz discloses a manual system and method for visually demonstrating make-up cosmetics and fashion accessories. This visualization requires manual user inputs in order to work effectively (i.e. it is neither automatic nor semi-automatic), and does not allow for hair restyling, advanced face modifications such as facelifts, or face feature recoloring and replacement on an automatic or semi-automatic basis.
A system and method is disclosed in U.S. Pat. No. 5,495,338 to Gouriou et al. which utilizes eye information (such as the inner eye colors) in order to estimate the ideal eye makeup for a given eye. However, this approach is purely a cosmetics suggestion system; it does not perform any face adjustment, hair restyling, or face recoloring automatically, semi-automatically, or even manually.
U.S. Pat. No. 5,659,625 to Marquardt discloses a method involving a geometric model to fit the face. These geometric models can be used for face animation as well as for cosmetics applications. However, this system, again, does not achieve automatic or semi-automatic feature modification, facelifting, or hair restyling.
A method for locating the lips of a face by bandpass filtering is described in U.S. Pat. No. 5,805,745 to Graf. However, this reference does not disclose a means for detecting other features of the face, nor does it describe automatic or semi-automatic face modifications, facelifts, or hair restyling. Furthermore, the bandpass filtering method is unsophisticated, and does not involve feature extraction methods utilizing edge, color and/or shape information, or relative feature and face information processing in order to accurately locate the facial features.
The method and apparatus described in U.S. Pat. No. 5,933,527 to Ishikawa allows a user to specify a search range which is then used to search for specific facial features. However, the approach taught therein is not capable of automatic facial feature detection, and is incapable of automatic or semi-automatic advanced face processing algorithms such as facelifts. Further, there is no mention of an application operable to switch the features of one face with another automatically or semi-automatically, and there is no means for hair restyling or replacement.
Finally, U.S. Pat. No. 7,079,158 to Lambertsen describes a virtual makeover system and method. However, the reference does not disclose a means for virtual operations on the face or automatic or semi-automatic advanced face modification such as facelifts, and suffers from a relatively complicated user interface.
In addition to these prior art references, there are several systems provided on the Internet that are operable to perform manual face modification, for example, EZface™, Approach Infinity Media™, and others. However, none of these systems is capable of face feature modification, hair restyling, or advanced face processing such as facelifts, on either an automatic or semi-automatic basis. As well, all of these systems employ Macromedia™ Flash technology, which places a heavier computational burden on the client/user computers and is not easily capable of being widely employed on mobile phones and handheld computers. Finally, the user interface complexity of all these systems is problematic, as they are generally difficult to use, complicated to adjust, and far more elaborate than a simple “choose and modify” approach.
In view of the foregoing, what are needed are methods and systems for modifying digital face images that overcome the limitations of the prior art described above. In particular, what is needed is a method and system employing advanced detection and localization techniques for enabling automatic and/or semi-automatic image modification. Further, what is needed is a method and system where facial modifications are processed on host servers instead of the user computers. In addition, what is needed is a method and system that is simple, easy to use, and capable of being implemented on a variety of devices.