In the last few years, face expression measurement has received significant attention, and many research demonstrations and commercial applications have been developed as a result. The reasons for this increased interest are manifold, but stem mainly from advances in related areas such as face detection, face tracking and face recognition, as well as the recent availability of relatively cheap computational power. Face expression measurement is widely applicable to areas such as image understanding, psychological studies, facial nerve grading in medicine, face image compression, synthetic face animation, video indexing, robotics and virtual reality.
Face expressions are generated by contractions of facial muscles, which temporarily deform facial features such as the eyelids, eyebrows, nose, lips and skin texture, often revealed by wrinkles and bulges. Typical changes in muscular activity are brief, lasting a few seconds, but rarely more than five seconds or less than 250 milliseconds. Face expression intensity can be measured by determining either the geometric deformation of facial features or the density of wrinkles appearing in certain face regions. The two main methodological approaches used for measuring the characteristics of face expressions are the judgement-based and the sign vehicle-based approaches.
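The geometric-deformation measurement mentioned above can be illustrated with a minimal sketch. The landmark names, coordinates and the normalisation by inter-ocular distance below are illustrative assumptions, not part of any specific system described here:

```python
# Hypothetical sketch: estimating expression intensity from the geometric
# deformation of facial landmarks between a neutral and a current frame.
from math import dist

def eyebrow_raise_intensity(neutral, current):
    """Intensity of an eyebrow raise, normalised by inter-ocular distance.

    `neutral` and `current` map landmark names to (x, y) coordinates taken
    from a neutral-expression frame and the current frame, respectively.
    """
    # Normalise by inter-ocular distance so the measure is scale-invariant.
    iod = dist(current["left_eye"], current["right_eye"])
    baseline = dist(neutral["left_brow"], neutral["left_eye"])
    deformed = dist(current["left_brow"], current["left_eye"])
    return (deformed - baseline) / iod

# Toy landmark sets: the brow moves up by 6 pixels in the current frame.
neutral = {"left_eye": (30, 50), "right_eye": (70, 50), "left_brow": (30, 40)}
current = {"left_eye": (30, 50), "right_eye": (70, 50), "left_brow": (30, 34)}
print(round(eyebrow_raise_intensity(neutral, current), 3))  # prints 0.15
```

A real system would obtain the landmarks from a face tracker and combine several such distances; wrinkle-density measures would instead analyse texture in fixed face regions.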
An initial step performed by a typical face recognition system is to detect the locations in an image where faces are present. Although there are many related problems, such as face localization, facial feature detection, face identification, face authentication and face expression recognition, face detection is still considered one of the most difficult. Most existing face recognition systems employ a single two-dimensional (2D) representation of the face of the human subject under inspection. However, face detection based on a 2D image is a challenging task because of variability in imaging conditions, image orientation, pose, presence or absence of facial artefacts, face expression and occlusion.
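Many 2D face detectors follow a sliding-window scheme: the image is scanned at several scales and each window is passed to a classifier. The sketch below shows only that scanning structure; the classifier is a deliberately trivial stand-in, and all parameter values are illustrative assumptions:

```python
# Illustrative sketch of sliding-window detection over a 2D grey image.
# The classifier here is a toy brightness test, not a real face model.
def sliding_window_detect(image, classify, window=24, step=8, scales=(1.0, 0.5)):
    """Return (x, y, size) triples for windows the classifier accepts.

    `image` is a 2D list of grey values; `classify` maps a patch to bool.
    """
    detections = []
    h, w = len(image), len(image[0])
    for scale in scales:
        size = int(window / scale)  # window size in original-image pixels
        for y in range(0, h - size + 1, step):
            for x in range(0, w - size + 1, step):
                patch = [row[x:x + size] for row in image[y:y + size]]
                if classify(patch):
                    detections.append((x, y, size))
    return detections

# Toy example: a bright square on a dark background, "detected" whenever a
# window's mean intensity exceeds 128.
img = [[200 if 8 <= r < 40 and 8 <= c < 40 else 0 for c in range(64)]
       for r in range(64)]
bright = lambda p: sum(map(sum, p)) / (len(p) * len(p[0])) > 128
print(len(sliding_window_detect(img, bright)) > 0)  # prints True
```

The difficulty noted above lies entirely in the classifier: pose, lighting, occlusion and expression changes mean that no simple per-window test separates faces from background reliably.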
Efforts to address the shortcomings of existing face recognition systems have centred on technologies for creating three-dimensional (3D) models of a human subject's face from a 2D digital photograph of the subject. However, such technologies are inherently susceptible to errors, since the computer is merely extrapolating a 3D model from a 2D photograph. In addition, such technologies are computationally intensive and hence may be unsuitable for deployment in face recognition systems, where speed and accuracy are essential for satisfactory performance.
Hence, in view of the foregoing problems, there exists a need for a method that provides improved face detection to enable identification of the face expressions of image objects.