US 12,169,889 B2
Enhanced system for generation of facial models and animation
Hau Nghiep Phan, Montreal (CA)
Assigned to Electronic Arts Inc., Redwood City, CA (US)
Filed by Electronic Arts Inc., Redwood City, CA (US)
Filed on Jun. 10, 2021, as Appl. No. 17/344,618.
Prior Publication US 2022/0398797 A1, Dec. 15, 2022
Int. Cl. G06T 13/40 (2011.01); G06N 3/08 (2023.01); G06T 17/20 (2006.01); G06V 20/40 (2022.01); G06V 40/16 (2022.01)
CPC G06T 13/40 (2013.01) [G06N 3/08 (2013.01); G06T 17/20 (2013.01); G06V 20/46 (2022.01); G06V 40/168 (2022.01); G06V 40/174 (2022.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
accessing a machine learning model trained based on a plurality of two-dimensional images of one or more real-world persons, three-dimensional facial meshes of the one or more real-world persons, two-dimensional texture maps corresponding to the three-dimensional facial meshes, a predefined set of facial expressions, and identity information associated with the real-world persons, wherein each of the texture maps is a two-dimensional image that maps to topography of the corresponding three-dimensional facial mesh, wherein the machine learning model is trained to generate, via a latent variable space, two-dimensional texture maps based on two-dimensional images of a person;
obtaining one or more images depicting a face of a first real-world person and first identity information of the first real-world person;
encoding the one or more images in the latent variable space;
generating a first predefined set of two-dimensional synthetic expressions of the first real-world person based on the one or more images, wherein the first predefined set of two-dimensional synthetic expressions corresponds to the predefined set of facial expressions;
generating, using the machine learning model, a first set of two-dimensional texture maps of the first real-world person based on the one or more images and the first identity information, the first set of two-dimensional texture maps including a diffuse texture map and a normal texture map for each expression of the first predefined set of two-dimensional synthetic expressions;
accessing a second machine learning model trained to generate a three-dimensional facial mesh based at least in part on identity information of a real-world person and a corresponding set of two-dimensional texture maps; and
generating, by the second machine learning model, a first three-dimensional mesh of the face of the first real-world person based at least in part on the first identity information and the first set of two-dimensional texture maps.
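The steps of claim 1 (encode images into a latent space, synthesize the predefined expressions, generate a diffuse and a normal texture map per expression, then reconstruct a three-dimensional mesh) can be sketched end to end with stub models. Everything below, including the function names, array shapes, the random-projection encoder, and the scalar stand-in for identity information, is a hypothetical illustration of the claimed data flow, not the patented implementation or its trained networks.

```python
import numpy as np

LATENT_DIM = 64       # assumed latent-space size
NUM_EXPRESSIONS = 5   # size of the predefined expression set (assumption)
TEX_RES = 8           # tiny texture-map resolution for illustration
NUM_VERTS = 100       # vertices in the facial mesh (assumption)

rng = np.random.default_rng(0)

def encode_images(images):
    """Encode 2-D face images into the latent variable space (stub:
    a fixed random projection stands in for the trained encoder)."""
    flat = np.concatenate([img.ravel() for img in images])
    proj = rng.standard_normal((LATENT_DIM, flat.size))
    return proj @ flat

def generate_synthetic_expressions(latent):
    """Produce one 2-D synthetic expression image per predefined
    expression (stub: shifts of the latent code)."""
    base = latent[:TEX_RES * TEX_RES].reshape(TEX_RES, TEX_RES)
    return [np.tanh(base + i) for i in range(NUM_EXPRESSIONS)]

def generate_texture_maps(latent, identity, expressions):
    """First model (stub): a diffuse and a normal texture map for
    each synthetic expression, conditioned on identity."""
    maps = []
    for expr in expressions:
        diffuse = np.clip(expr + identity, -1.0, 1.0)
        normal = np.stack([expr, expr, np.ones_like(expr)], axis=-1)
        maps.append({"diffuse": diffuse, "normal": normal})
    return maps

def generate_mesh(identity, texture_maps):
    """Second model (stub): 3-D facial mesh from identity information
    and the corresponding set of texture maps."""
    base = rng.standard_normal((NUM_VERTS, 3))
    offset = np.mean([m["diffuse"].mean() for m in texture_maps]) + identity
    return base + offset  # (NUM_VERTS, 3) vertex positions

# Pipeline, following the order of steps in claim 1.
images = [rng.standard_normal((TEX_RES, TEX_RES)) for _ in range(2)]
identity = 0.1  # scalar stand-in for the first identity information
latent = encode_images(images)
expressions = generate_synthetic_expressions(latent)
texture_maps = generate_texture_maps(latent, identity, expressions)
mesh = generate_mesh(identity, texture_maps)

print(len(texture_maps), mesh.shape)
```

Note that, as in the claim, the texture-map generator and the mesh generator are two distinct models: the first maps images plus identity to per-expression diffuse/normal maps, and the second consumes those maps plus identity to produce the mesh.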