The present invention generally relates to simulations, namely virtual world simulations of the real world or real life, and more particularly to systems and methods for creating virtual objects and avatars.
Activities in the virtual space are increasing due to advances in computers and the World Wide Web. More and more commercial transactions and customer decisions are made in the virtual sphere rather than through real person-to-person interaction. In fact, e-shopping has become a preferred way of purchasing goods for many consumers. In many instances, once a customer has selected a desired product, the customer will validate the selection by physically examining the product in a store, where the customer is able to look at the product from various perspectives. This results from a lack of realistic representation of the product in the virtual sphere.
There is accordingly a need to improve virtual representations of objects to reduce the need for physical examination in the real world. One of the main limitations of the virtual space is that three-dimensional objects are represented via two-dimensional display screen images. Three-dimensional point cloud models are particularly suited to add a third dimension to these two-dimensional images. Perspective viewing algorithms associated with such three-dimensional point cloud models already exist. Using these, a given object may be represented by a three-dimensional point cloud model, and various algorithms may be used to manipulate such models.
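By way of illustration, the perspective viewing of a point cloud on a two-dimensional display may be sketched with a standard pinhole projection. The function below is a minimal, generic example and is not taken from any of the cited references; the function name and focal-length parameter are illustrative only.

```python
import numpy as np

def project_points(points, focal_length=1.0):
    """Project 3D points (N x 3, camera coordinates, z > 0)
    onto a 2D image plane using a pinhole perspective model."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    # Perspective division: x' = f*x/z, y' = f*y/z
    return focal_length * points[:, :2] / z[:, None]

# A point at lateral offset 1 and depth 2 projects to offset 0.5 (f = 1).
print(project_points([[1.0, 1.0, 2.0]]))  # [[0.5 0.5]]
```

Applying such a projection to every point of a cloud yields the two-dimensional image seen from a chosen viewpoint; changing the camera pose before projection produces the various perspectives referred to above.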
Point cloud three-dimensional models may be generated using a number of three-dimensional scanning techniques. One approach, described by Hu et al. in U.S. Pat. No. 9,230,325, is photogrammetry using geometrical topography triangulation. Two or more cameras with different direct perspectives can be used to obtain stereoscopy, as described by Peuchot in French Patent No. 2,986,626. Alternatively, stereoscopy can also be obtained by using mirrors and reflective surfaces in the viewing path, as described by Tanaka et al. in U.S. Patent Application Publication No. 2013/0335532. Patterned or structured light can also be projected onto the object to obtain depth information and facilitate multi-perspective photogrammetric reconstruction, as described by Schneider et al. in U.S. Pat. No. 9,228,697. Other approaches may also be used, such as sonar or lidar time-of-flight based techniques, in which the delay of echoes is used to calculate the distance between the reflecting surface and the capture device, as described by Meinherz in U.S. Pat. No. 9,332,246. Using the above techniques, the three-dimensional models are generated directly from the set of captured points, thereby incorporating any measurement error and/or aberration into the three-dimensional model itself. The quality of the model is therefore directly related to the quality of the acquired points. In addition, these generalist approaches do not use prior knowledge to evaluate the relevance of the acquired points. For example, when building a three-dimensional model of the object, the background is also acquired and must be erased using post-processing techniques.
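The time-of-flight principle mentioned above reduces to a simple relation: the distance to the reflecting surface is half the round-trip echo delay multiplied by the wave propagation speed. The following is a generic sketch of that calculation, not code from any cited patent; the constant for sonar in air is an assumed typical value.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for lidar; sonar in air uses ~343 m/s

def echo_distance(round_trip_delay_s, wave_speed=SPEED_OF_LIGHT):
    """Distance to a reflecting surface from a round-trip echo delay."""
    # The pulse travels to the surface and back, hence the factor of 2.
    return wave_speed * round_trip_delay_s / 2.0

# A 100 ns lidar echo corresponds to roughly 15 m.
print(round(echo_distance(100e-9), 2))  # 14.99
```

Note that any jitter in measuring the delay translates directly into range error on each captured point, which is one way measurement error becomes embedded in the resulting model.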
With users spending more and more time in the virtual space, there is also a need for realistic representations of individuals, including individuals' faces, also called avatars. The three-dimensional scanning techniques described above can also be used to generate personalized avatars in specific poses. To interact in the virtual space, these personalized avatars must be able to move and express themselves. There is accordingly also a need to derive other personalized characteristics from the individuals. Sareen et al. disclose in U.S. Patent Application Publication No. 2016/0247017 how to compute biometric parameters from acquired avatars. Davis proposes in U.S. Patent Application Publication No. 2013/0257877 to build a general look-up table to store captured characteristics, from which a personality characteristics library is derived. This very generic approach is applied to avatars mimicking real protagonists during an online interaction. Similarly, Goodman et al. propose in U.S. Pat. No. 8,970,656 to use a library of facial-expression-mimicking avatars for video conferencing. Evertt et al. propose in U.S. Pat. No. 9,013,489 to animate an avatar overlaying a stick figure mimicking a stereo-filmed person. These approaches can offer a certain level of personalization by reproducing visually-observed poses and expressions from a library; however, they are unable to realistically reproduce the motions and expressions of a person unless those specific motions and expressions are present in the library.
There is accordingly a need for virtual object creation systems and methods that do not exhibit the above shortcomings. The present invention is directed to a virtual object creation system which generates point cloud three-dimensional models that extend beyond the directly acquired visual information by using weighted-average and residue-minimization computer programs. The virtual object creation system of the present invention is also directed to specific articulation modeling between portions/sub-portions of the model, and to physical properties particularly suited for avatar animation, with specific parameterization derived from pose protocols during image acquisition.
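The specific weighted-average and residue-minimization programs of the invention are not detailed in this section; purely as a generic illustration of residue minimization over acquired points, the sketch below fits a plane to weighted, possibly noisy scan points by weighted least squares, where per-point weights could down-weight unreliable acquisitions. All names and the choice of a planar model are illustrative assumptions.

```python
import numpy as np

def fit_plane_weighted(points, weights):
    """Fit z = a*x + b*y + c to points (N x 3) by weighted least squares,
    i.e. minimize the sum of weighted squared residues."""
    points = np.asarray(points, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], points[:, 2] * w, rcond=None)
    return coeffs  # (a, b, c)

# Points lying exactly on the plane z = 2x + 3y + 1 are recovered exactly.
pts = [[0, 0, 1], [1, 0, 3], [0, 1, 4], [1, 1, 6]]
print(np.round(fit_plane_weighted(pts, [1, 1, 1, 1]), 6))  # [2. 3. 1.]
```

Fitting a parametric model to the acquired points, rather than keeping the raw points themselves, is one general way a model can extend beyond the directly acquired visual information and attenuate per-point measurement error.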