1. Technical Field
The present invention relates to a method and system for generating caricatured images of subjects based on a closed group of subjects. In particular, the invention is concerned with adaptively generating caricatures in dependence on subjects joining and leaving the closed group.
2. Related Art
Automatic caricaturing methods and systems are already known in the art. Brennan, S. E. in “Caricature Generator: The Dynamic Exaggeration of Faces by Computer.” Leonardo, Vol. 18, No. 3, pp. 170-178 describes a computational model of caricature which allows a two-dimensional line-drawn caricature to be generated from a photograph. The user traces over the original image (by placing a set of markers over the image) to generate a line drawing of the subject. An example of such an original image and the resulting line drawing are shown in FIG. 15. Here, an original image as shown in FIG. 15(a) results in a line drawing as shown in FIG. 15(b).
Having obtained the line drawing of the subject, this drawing is then compared with a corresponding line drawing of a “mean” or “prototype” face, by which is meant an average face of a group usually comprising the same race, gender, and colour as the subject. Thus, for the white Caucasian male shown in FIG. 15(a), usually a prototype face of an “average” white Caucasian male would be used. In some circumstances prototype faces from different ethnic groups may be used.
Rowland et al in “Transforming Facial Images in 2 and 3-D”. Imagina 97—Conferences—ACTES/Proceedings, Feb, Monte Carlo, (1997), pp. 159-175 describe how a prototype face may be derived as follows. A prototype can be defined as being a representation of the consistencies across a collection of faces. For example, a prototypical male Caucasian face would contain all that is consistent about Caucasian faces and can be generated by calculating a mean face from a set of Caucasian faces.
To derive the prototypical shape for a group of faces, the delineation data for each face are first “normalised”, making the faces nominally of the same size and orientation. The left and right pupil centres provide convenient landmark points for this process. The first step is to calculate the average left and right eye positions for the whole population. The next step is to apply a uniform translation, scaling, and rotation to the (x, y) positions of all the feature points, thus normalising each face to map the left eye to the average left eye position and the right eye to the average right eye position. This process maintains all the spatial relationships between the features within each face but standardises face size and alignment. It is then possible to calculate the average positions of each remaining template feature point (after alignment), the resulting data constituting the mean shape for the given population. A line drawing of the resulting “mean” or prototype face can then be obtained. An example line drawing of a mean face is shown in FIG. 16.
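The normalisation and averaging steps above can be sketched as follows. This is a minimal illustration in Python with NumPy, assuming each face is an (N, 2) array of (x, y) feature points with the left and right pupil centres at known indices; the function names and the index convention are illustrative, not taken from the cited work:

```python
import numpy as np

def normalise_to_eyes(points, left_eye, right_eye, avg_left, avg_right):
    """Apply a uniform translation, scaling and rotation so that the face's
    pupil centres map onto the population-average eye positions."""
    src = right_eye - left_eye           # inter-pupil vector before alignment
    dst = avg_right - avg_left           # inter-pupil vector after alignment
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    # Rotate/scale about the left pupil, then translate it to the target
    return (points - left_eye) @ (scale * R).T + avg_left

def prototype_shape(faces, eye_idx=(0, 1)):
    """Mean shape for a population: compute the average eye positions,
    normalise every face to them, then average each feature point."""
    faces = [np.asarray(f, dtype=float) for f in faces]
    li, ri = eye_idx
    avg_left = np.mean([f[li] for f in faces], axis=0)
    avg_right = np.mean([f[ri] for f in faces], axis=0)
    aligned = [normalise_to_eyes(f, f[li], f[ri], avg_left, avg_right)
               for f in faces]
    return np.mean(aligned, axis=0)
```

Because every face is first mapped to the same eye positions, the per-point average is taken over comparably sized and oriented shapes, which is what makes the resulting mean shape meaningful.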
Once a prototype has been formed for a collection of faces it is possible to generate caricatures by accentuating the difference between an individual face and a relevant prototype. After normalising the feature location data from the prototype to the eye positions of an example face, all feature points on the example face can be shifted away from their counterparts on the prototypical face by a given percentage, as shown in FIG. 9. This percentage is the amount of caricature, and can be thought of as extrapolating a morph between the prototype and the example face. If the percentage is 100% then the product of the manipulation will be the prototype; if the percentage is 50% then the result will be halfway along the morph between the prototype and the example face; if the percentage is 0% then the example face is returned; and if it is −50% then a caricature of the original face is the result. More generally, any percentage less than 0% will result in a caricatured face. Equation 1.1 expresses Brennan's caricaturing algorithm in mathematical form:

Q=P+b(P−S),  (Equation 1.1)

where:

Q is the feature point of the caricature model,
P is the feature point of the individual head model,
S is the feature point of the mean head model, and
b is a coefficient for caricaturing.
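Equation 1.1 is a single vectorised operation when the feature points are held as arrays. A minimal sketch in Python with NumPy (the function name is illustrative; the sign convention follows Equation 1.1, so a positive b exaggerates, which corresponds to a negative percentage in the morph description above):

```python
import numpy as np

def caricature(P, S, b):
    """Brennan's caricaturing algorithm (Equation 1.1): Q = P + b(P - S).

    P: feature points of the individual model, shape (N, 2) or (N, 3)
    S: corresponding feature points of the mean/prototype model
    b: caricature coefficient; b = 0 returns P unchanged, b = 0.2 gives a
       20% exaggeration away from the mean, and b = -1 collapses P onto
       the prototype S.
    """
    P = np.asarray(P, dtype=float)
    S = np.asarray(S, dtype=float)
    return P + b * (P - S)
```

For example, applying this with b = 0.2 to the feature points of the line drawing of FIG. 15(b) against the mean face of FIG. 16 would produce a caricature of the kind shown in FIG. 17.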
An example caricature image of the line drawing of FIG. 15(b) generated according to the Brennan algorithm, and using a caricature coefficient b of 20% is shown in FIG. 17.
Benson and Perrett in “Synthesising continuous-tone caricatures.” Image and Vision Computing, 9, pp. 123-129 extended the technique to produce photographic caricatures using computer-morphing techniques. However, the basic underlying caricaturing algorithm is the same as Brennan's. Moreover, the same caricaturing technique is easily extended into 3D by applying the caricaturing algorithm to every vertex or to 3D-mesh control points, as described in Fujiwara, T., Nishihara, T., Tominaga, M., Kato, K. (1998) “On the Detection of Feature Points of 3D Facial Image and Its Application to 3D Facial Caricature.” and Shadbolt, A. (2003) “From 2D photographs to 3D caricatures.” http://www.dcs.shef.ac.uk/˜u0as2 respectively. When the caricaturing algorithm is applied to the control points of the mesh, the resulting caricatured control points are then interpolated over the rest of the mesh to produce the 3D caricature. FIG. 10 shows a 3D-caricature (right) produced from the original 3D head model (left) and the mean model.
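The control-point variant can be sketched as follows. This is an illustrative Python/NumPy example only: Equation 1.1 is applied at the control vertices, and the resulting displacements are spread over the remaining vertices by inverse-distance weighting, which is a simple stand-in for whatever interpolation scheme a given implementation of the cited 3D methods actually uses:

```python
import numpy as np

def caricature_mesh(vertices, ctrl_idx, mean_ctrl, b):
    """Caricature a 3D mesh via its control points.

    vertices:  (N, 3) array of mesh vertex positions (the individual model)
    ctrl_idx:  indices of the control vertices within `vertices`
    mean_ctrl: (len(ctrl_idx), 3) control points of the mean head model
    b:         caricature coefficient, as in Equation 1.1
    """
    V = np.asarray(vertices, dtype=float)
    C = V[ctrl_idx]                          # control points P
    disp = b * (C - np.asarray(mean_ctrl))   # b(P - S) at the control points
    out = V.copy()
    for i, v in enumerate(V):
        d = np.linalg.norm(C - v, axis=1)
        if d.min() < 1e-12:
            # v is itself a control point: apply its displacement directly
            out[i] = v + disp[np.argmin(d)]
        else:
            # otherwise interpolate displacements by inverse-distance weights
            w = 1.0 / d**2
            out[i] = v + (w[:, None] * disp).sum(axis=0) / w.sum()
    return out
```

Applying the algorithm to every vertex instead of to control points is the degenerate case in which every vertex is its own control point, reducing to the plain Equation 1.1 update per vertex.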
With respect to other aspects of computational caricatures, it has also been shown in Rhodes, G. & Brennan, S. E. (1987). Identification and Rating of Caricatures: Implications for Mental Representations of Faces. Cognitive Psychology, 19, 473-497 that caricaturing of faces results in greater recognition of the caricatured face as the subject than an un-caricatured face. An example of this is given in “In the Eye of the Beholder—The Science of Face Perception”, Bruce, V. and Young, A. 2000, ISBN 0-190852439-0 at pp. 121-123, where photographs of two identical twins are manipulated to provide an “average” image of the two photographs, and differences are identified between the “average” image and the actual photographs. Further image manipulation is then performed in dependence on the identified differences so as to exaggerate them, thus highlighting the differences, which friends and family can then learn so as to be able to identify each twin. In effect, within this work a closed group of subjects (the two twins) is formed, a mean of the closed group is taken, and then the images of the subjects are caricatured away from this mean so as to render the images more recognisable. The precise image manipulations used to perform the caricaturing are described in Chapter 5 of the book.
Although the above work by Bruce and Young introduces the concept of the formation of a mean image from a closed group, and the exaggeration of images of members of the group away from that mean so as to render them recognisable, problems remain in practical implementations of such techniques where the members of the group may change over time.