This application claims the priority of Korean Patent Application No. 2001-78963, filed Dec. 13, 2001 in the Korean Intellectual Property Office, which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to image processing, and more particularly, to a method and an apparatus for generating textures required for mapping a two-dimensional (2D) facial image to a three-dimensional (3D) facial model.
2. Description of the Related Art
Since faces contain many curved surfaces and people readily perceive differences between faces from subtle changes in them, 3D facial modeling is very difficult. Generally, 3D facial models are generated and used in two different fields.
First, 3D facial models are generated for movies. In this case, the facial models must have excellent quality but need not be generated in real time. Accordingly, the texture quality of such 3D facial models can be improved simply by increasing the size of the textures used for the facial models.
Second, 3D facial models are generated for games or mobile devices. Here, mobile devices must generate 3D facial models using a limited amount of resources and produce real-time animations, for example, avatars based on the facial models. Although game devices and computers, owing to technological improvements, process facial images faster and under fewer resource constraints than mobile devices, they still cannot avoid using a limited amount of resources because, unlike the first field, they must generate the models in real time.
A first conventional method for generating textures for a 3D facial model uses a simple cylindrical coordinate conversion. In this case, the textures corresponding to the face occupy only one eighth of the overall texture of the 3D head image: the facial region spans about 50% of the head image in the vertical direction and about 25% in the horizontal direction. Since people distinguish facial models mainly by the regions around the eyes, the first conventional method wastes the portion of the texture allotted to the remaining, less important regions.
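As a minimal sketch of such a simple cylindrical conversion (the function name and the normalization to the unit square are illustrative assumptions, not taken from the patent), a 3D head vertex can be mapped to 2D texture coordinates by taking the angle around the vertical axis as the horizontal coordinate and the height as the vertical coordinate:

```python
import math

def cylindrical_uv(x, y, z, y_min, y_max):
    """Map a 3D head vertex (x, y, z) to texture coordinates (u, v) in [0, 1]
    by a simple cylindrical projection about the vertical (y) axis."""
    theta = math.atan2(z, x)               # angle around the head, in [-pi, pi]
    u = (theta + math.pi) / (2 * math.pi)  # normalize angle to [0, 1]
    v = (y - y_min) / (y_max - y_min)      # normalize height to [0, 1]
    return u, v

# Under this mapping the face occupies roughly 25% of u and 50% of v,
# i.e. about 0.25 * 0.5 = 1/8 of the total texture area.
face_fraction = 0.25 * 0.5
```

This illustrates the waste described above: seven eighths of the texture area is spent on the back and top of the head, which contribute little to perceived facial likeness.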
A second conventional method for generating textures compresses the textures used for generating a 3D facial model. Therefore, the second conventional method requires a separate apparatus for recovering the compressed textures before using them.
A third conventional method for generating textures reduces the absolute sizes of textures when the amount of resources is limited. Accordingly, the third conventional method deteriorates the quality of the textures for a 3D facial model.