1. Field of the Invention
The present invention provides an improved method and apparatus for image processing in acquisition devices. In particular the invention provides improved real-time face tracking in a digital image acquisition device.
2. Description of the Related Art
Face tracking in digital image acquisition devices includes methods of marking human faces in a series of images such as a video stream or a camera preview. Face tracking can be used to indicate to a photographer the locations of faces in an image, thereby improving acquisition parameters, or allowing post processing of the images based on knowledge of the locations of the faces.
In general, face tracking systems employ two principal modules: (i) a detection module for locating new candidate face regions in an acquired image or a sequence of images; and (ii) a tracking module for confirming face regions.
A well-known fast-face detection algorithm is disclosed in US 2002/0102024, hereinafter referred to as “Viola-Jones”, which is hereby incorporated by reference. In brief, Viola-Jones first derives an integral image from an acquired image, which is usually an image frame in a video stream. Each element of the integral image is calculated as the sum of intensities of all points above and to the left of the point in the image. The total intensity of any sub-window in an image can then be derived by combining the integral image values at the four corner points of the sub-window: the value at the bottom right corner, minus the values at the top right and bottom left corners, plus the value at the top left corner. Also, intensities for adjacent sub-windows can be efficiently compared using particular combinations of integral image values from points of the sub-windows.
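The integral-image construction can be sketched as follows (a minimal illustration in plain Python; the function names and the list-of-lists image representation are ours, not taken from Viola-Jones):

```python
# Sketch of integral-image construction and constant-time sub-window sums.
# Assumes a grayscale image given as a 2D list of intensities.

def integral_image(img):
    """ii[y][x] = sum of img[v][u] for all v <= y, u <= x."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                      # running sum of this row
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def window_sum(ii, top, left, bottom, right):
    """Total intensity of an inclusive sub-window from four corner lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]                   # strip above the window
    if left > 0:
        total -= ii[bottom][left - 1]                 # strip left of the window
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]                # re-add doubly subtracted corner
    return total
```

Once the integral image is built, any rectangular sum costs at most four lookups regardless of the sub-window size, which is what makes the Haar feature evaluations cheap.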
In Viola-Jones, a chain (cascade) of 32 classifiers based on rectangular (and increasingly refined) Haar features is used with the integral image by applying the classifiers to a sub-window within the integral image. For a complete analysis of an acquired image, this sub-window is shifted incrementally across the integral image until the entire image has been covered.
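The early-rejection behavior of such a cascade can be sketched as follows (the stage functions and thresholds here are toy placeholders, not the trained Viola-Jones classifiers):

```python
# Minimal cascade sketch: each stage scores a sub-window, and the window
# is rejected as soon as any stage's score falls below its threshold.

def run_cascade(stages, window):
    """stages: list of (classifier_fn, threshold) pairs, cheapest first."""
    for classify, threshold in stages:
        if classify(window) < threshold:
            return False    # early rejection: most non-face windows exit here
    return True             # survived every stage: candidate face region

# Toy stages thresholding single illustrative "feature" values:
stages = [
    (lambda w: w["contrast"], 0.2),
    (lambda w: w["symmetry"], 0.5),
]
```

The design point is that the vast majority of sub-windows are discarded by the first few cheap stages, so the expensive later stages run on only a small fraction of the image.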
In addition to moving the sub-window across the entire integral image, the sub-window is also scaled up/down to cover the possible range of face sizes. In Viola-Jones, a scaling factor of 1.25 is used and, typically, a range of about 10-12 different scales is used to cover the possible face sizes in an XVGA size image.
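The combined shift-and-scale scan can be sketched as follows (the base window size, step rule and the generator structure are illustrative assumptions; only the 1.25 scaling factor comes from Viola-Jones):

```python
# Sketch of the scan: shift a sub-window across the frame, then rescale it
# by 1.25 and repeat, until the window no longer fits in the frame.

def scan_scales(img_w, img_h, base=24, scale_factor=1.25):
    """Yield (window_size, x, y) for every sub-window position at every scale."""
    size = base
    while size <= min(img_w, img_h):
        step = max(1, size // 10)              # shift increment grows with scale
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                yield size, x, y               # one sub-window to classify
        size = int(size * scale_factor)        # next, larger candidate face size
```

The number of distinct scales visited depends on the assumed minimum face size and the frame dimensions; with a suitable minimum size it comes out at roughly the 10-12 scales quoted above for an XVGA frame.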
The resolution of the integral image is determined by the smallest sized classifier sub-window, i.e. the smallest size face to be detected, as larger sized sub-windows can use intermediate points within the integral image for their calculations.
A number of variants of the original Viola-Jones algorithm are known in the literature. These generally employ rectangular, Haar feature classifiers and use the integral image techniques of Viola-Jones.
Even though Viola-Jones is significantly faster than previous face detectors, it still involves significant computation, and a Pentium-class computer can barely achieve real-time performance. In a resource-restricted embedded system, such as a hand-held image acquisition device, e.g., a digital camera, a hand-held computer or a cellular phone equipped with a camera, it is generally not practical to run such a face detector at real-time frame rates for video. From tests within a typical digital camera, it is possible to achieve complete coverage of all 10-12 sub-window scales only with a 3-4 classifier cascade. This allows some level of initial face detection to be achieved, but with undesirably high false positive rates.
In US 2005/0147278, by Rui et al., which is hereby incorporated by reference, a system is described for automatic detection and tracking of multiple individuals using multiple cues. Rui et al. disclose using Viola-Jones as a fast face detector. However, in order to avoid the processing overhead of Viola-Jones, Rui et al. instead disclose using an auto-initialization module which uses a combination of motion, audio and fast face detection to detect new faces in the frame of a video sequence. The remainder of the system employs well-known face tracking methods to follow existing or newly discovered candidate face regions from frame to frame. The method described by Rui et al. involves some video frames being dropped in order to run a complete face detection process.
U.S. Pat. No. 6,940,545 to Ray et al., which is incorporated by reference, describes the use of face detection to adjust various camera parameters including Auto-Focus (AF), Auto-Exposure (AE), Auto White Balance (AWB) and Auto Color Correction (ACC). The detection algorithm employed by Ray et al. is a two-part algorithm wherein the first stage is fast but exhibits a high false positive rate, and the second stage is more accurate but requires significantly more processing time.
In particular, Ray et al. state that the face detector must operate on an image in a timeframe of less than one second (col. 10, line 57), although they do not specify whether this timeframe applies to the combination of fast and accurate detectors or is a limit on the fast detector only. Where a detection or combined detection/tracking algorithm is applied to preview images in a state-of-the-art camera, it typically operates on a timeframe of 20-30 ms in order to be compatible with preview frame rates of 30-50 fps. This is a significantly faster requirement than any capability specified in Ray et al.
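The per-frame budget implied by those preview frame rates is simple arithmetic (this is our illustration, not code from either patent):

```python
# Per-frame processing budget implied by a preview frame rate.

def frame_budget_ms(fps):
    """Milliseconds available per frame at the given frames-per-second rate."""
    return 1000.0 / fps

# A 50 fps preview stream leaves 20 ms per frame; 30 fps leaves ~33 ms.
```

Any detection/tracking work that cannot fit in this budget forces frames to be dropped or processing to be deferred.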
Ray et al. also describe the use of a “framing image” (e.g. FIG. 3), which may be deemed somewhat analogous to a preview image within a state-of-the-art camera. However, the concept of tracking face regions from frame to frame within a stream (or collection) of (low-resolution) preview images is not described by Ray et al. Also, U.S. Pat. No. 7,269,292, which is incorporated by reference, discloses tracking a face region within a collection of low-resolution images and using this information to selectively adjust image compression.
Disadvantages of the processes described by Ray et al. include the following. First, a color-based fast face detector is quite unreliable, as many backgrounds and scene objects can be confused with skin colors. Second, the face detector of Ray et al. is applied to an entire scene before any additional processing occurs; this can lead to a time lag of one second plus the time needed to implement processes such as auto-focus, auto-exposure or auto white balance. Third, Ray et al. only describe adjusting camera parameters in response to a user activating an image acquisition, whereas in a practical camera it is desirable to constantly adjust exposure, focus and color balance based on each frame of a preview stream. Fourth, where image acquisition is asynchronous with respect to the preview stream, Ray et al. apply face tracker processing to the current frame in its entirety before the rest of their method is applied.