1. Field of the Invention
The present invention relates to a method for the pre-operative prediction of the appearance of a body or a part of a body, e.g., the face, after surgery. The invention also relates to a planning system wherein the method can be applied.
2. Description of the Related Technology
In maxillofacial and plastic surgery or dermosurgery, parts of the body, such as the skull, dentition, soft tissues or skin patches, are surgically remodelled or restored. An example is orthognathic surgery, in which the relation of the jawbones is adjusted. Another example is breast augmentation, in which the breasts are enlarged using breast implants.
Generating realistic images (e.g., of faces) has been a central goal in three-dimensional (3D) shape acquisition, animation and visualisation.
3D Acquisition
Several methods exist to acquire a 3D geometric description of (a part of) the body. Well known are the medical imaging modalities, such as CT and MRI, and 3D photographic systems. The latter can be subdivided into two categories, i.e., those using active methods, which project a specific pattern onto the body, and those using passive methods, which acquire a 3D geometric description of the body from one or more images and illumination conditions, with or without the use of a priori geometric knowledge. Simultaneously with the 3D geometric description, 3D photographic systems deliver the texture of the body, which is used to render the 3D surface.
Animation
Several methods exist to animate a 3D body shape. Motion simulation can be based on heuristic rules, physics-based knowledge, or it can be image-derived (e.g., building a statistical deformation model based on a set of images from different persons and/or expressions). The result can be natural or artificial. For example, the facial motion of one person can be used to drive the facial motion of another person.
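An image-derived statistical deformation model of the kind mentioned above can be sketched as follows: each training shape (one person or expression) is flattened into a vector of corresponding vertex coordinates, and new shapes are generated from the mean shape plus a weighted sum of principal deformation modes. This is a minimal illustration assuming pre-established vertex correspondence; the function names and dimensions are illustrative, not taken from the source.

```python
import numpy as np

def build_statistical_model(shapes):
    """shapes: (n_samples, 3*n_vertices) array of corresponding meshes.

    Returns the mean shape, the principal deformation modes, and the
    singular values (which indicate how much variation each mode carries).
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal deformation modes.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    return mean, modes, singular_values

def synthesize(mean, modes, coeffs):
    """Generate a new shape as mean + weighted sum of deformation modes."""
    return mean + coeffs @ modes[:len(coeffs)]
```

With zero coefficients the model reproduces the mean shape; driving the coefficients over time animates the mesh, e.g., transferring the deformation modes learned from one person to the shape of another.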
Visualisation
3D visualization or rendering uses a texture map and a reflectance model of the (part of the) body.
Texture mapping refers to a computer graphics technique wherein a texture image (or texture map) is applied to a polygonal mesh or some other surface representation by coupling the texture image, with its associated colour/gray values, to the 3D surface. The result is that (some portion of) the texture image is mapped onto the surface when the surface is rendered.
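The coupling described above can be sketched as a lookup: each mesh vertex carries normalized (u, v) texture coordinates, and rendering samples the texture image at those coordinates. The following is a minimal nearest-neighbour sketch (real renderers typically use bilinear or mipmapped filtering); the function name is illustrative.

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour lookup of a texture image at normalized (u, v).

    texture: (height, width, channels) image array.
    uv: texture coordinates in [0, 1], as stored per mesh vertex.
    """
    h, w = texture.shape[:2]
    u, v = np.clip(uv, 0.0, 1.0)
    # Map normalized coordinates to the nearest pixel index.
    x = min(int(u * (w - 1) + 0.5), w - 1)
    y = min(int(v * (h - 1) + 0.5), h - 1)
    return texture[y, x]
```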
Texture is derived from one or more 2D or 3D photographs of the body. When using a 3D photographic system, a texture map is typically delivered simultaneously with the 3D shape description.
When using 2D photographs, a method to match or register these 2D photographs with the 3D surface description is needed. Matching can be done based on a set of corresponding points, or on a metric (e.g., mutual information) that expresses the correspondence between 2D-image-derived features and 3D-shape-based properties.
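Point-based 2D-3D matching of this kind is commonly solved by estimating a camera projection matrix from the corresponding points, e.g., with the direct linear transform (DLT). The sketch below assumes at least six non-coplanar 3D points with known 2D image positions; it is one standard technique for this registration step, not necessarily the one used in the cited methods.

```python
import numpy as np

def estimate_projection(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix from point correspondences
    using the direct linear transform (needs >= 6 non-coplanar points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear equations in P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.array(rows)
    # P (up to scale) is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point to 2D image coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

Once the projection matrix is known, every visible 3D surface point can be projected into the photograph to read off its texture value.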
The model of body reflectance can be based on skin or skin-like diffuse and specular (mirror-like reflection) properties.
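A common way to combine diffuse and specular (mirror-like) reflection is a Lambertian term plus a Phong specular lobe. The sketch below is illustrative of that family of reflectance models, not of any specific skin model from the source; the coefficients are placeholder values.

```python
import numpy as np

def phong_shade(normal, light_dir, view_dir, kd=0.7, ks=0.3, shininess=20):
    """Scalar intensity from a Lambertian diffuse term plus a Phong
    specular lobe. All direction vectors are assumed unit length;
    kd, ks and shininess are illustrative material parameters."""
    n, l, v = (np.asarray(a, float) for a in (normal, light_dir, view_dir))
    diffuse = max(n @ l, 0.0)
    # Mirror reflection of the light direction about the surface normal.
    r = 2.0 * (n @ l) * n - l
    specular = max(r @ v, 0.0) ** shininess if diffuse > 0 else 0.0
    return kd * diffuse + ks * specular
```

For skin, the diffuse term dominates, while the narrow specular lobe reproduces highlights on oily or moist regions.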
2D visualization has been used to show (a part of) the body under simulated or artificial illumination conditions and for animation by morphing (part of) the body. In these applications, photo-realism is the primary concern.
The following documents relate to the subject-matter described herein:
    ‘Computer-assisted three-dimensional surgical planning and simulation: 3D color facial model generation’, Xia et al., Int J Oral Maxillofac Surg, 29, pp. 2-10, 2000;
    ‘Computer-assisted three-dimensional surgical planning and simulation: 3D soft tissue planning and prediction’, Xia et al., Int J Oral Maxillofac Surg, 29, pp. 250-258, 2000;
    ‘Three-dimensional virtual reality surgical planning and simulation workbench for orthognathic surgery’, Xia et al., Int J Adult Orthod Orthognath Surg, 15(4), 2000;
    ‘Three-dimensional virtual-reality surgical planning and soft-tissue prediction for orthognathic surgery’, Xia et al., IEEE Transactions on Information Technology in Biomedicine, 5(2), pp. 97-107, 2001;
    ‘Fast texture mapping of photographs on a 3D facial model’, Iwakiri et al., Proc Image and Vision Computing New Zealand 2003, November 2003, Palmerston North, New Zealand, pp. 390-395.
The methods of Xia et al. and of Iwakiri et al. use a set of photographs comprising a frontal (0° view), right (90° view) and left (270° view) photograph of the patient, which are projected as a texture map onto the 3D head mesh obtained from CT for 3D visualization.