Mesh generation based on a video signal usually involves several digital cameras, each one recording a signal. A computer then analyzes the video signals to interpolate the depth of every point in space and generate a three dimensional geometry representing the player being recorded. Unfortunately, most of today's algorithms are computationally expensive to implement and prone to errors, since the video signals are often too complex to analyze, making real-time realistic rendering of objects impractical in many situations.
A new generation of cameras has become available that obtains depth information directly from what is referred to as a “depth sensor” in the camera. The Microsoft Kinect, for example, is one of these new generation cameras. Based on infrared technology, the Kinect computes the depth of the objects in its field of view, just as a regular camera captures their color, and outputs a two dimensional grid of depth values. Moreover, this next generation of cameras has a lower cost than the prior generation.
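The two dimensional grid of depth values such a camera emits can be lifted into camera-space three dimensional points by back-projecting each pixel through a pinhole model. The following is a minimal sketch of that step; the intrinsic parameters `fx`, `fy`, `cx`, `cy` are illustrative placeholder values, not those of the Kinect or any particular camera:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a 2-D grid of depth values (in meters) into
    3-D camera-space points using a pinhole camera model."""
    h, w = depth.shape
    # Pixel coordinates for every sample in the grid.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy 2x2 depth grid, 1 meter everywhere; intrinsics are made up.
pts = depth_to_points(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts.shape)  # (2, 2, 3)
```

In practice the intrinsics would come from the camera's calibration data rather than constants.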
The construction of a three dimensional mesh from a two dimensional depth signal is relatively easy and less prone to errors, provided the signal is of good quality. One skilled in the art will note, however, that the signal from these new generation cameras is usually noisy, both spatially and temporally, and that its quality depends on several external conditions. For the present specification, noise in a signal is the random variation of depth information produced by the sensor and circuitry of the camera. Previous implementations have been unable to deliver a good-looking, stable mesh from next generation cameras.
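To illustrate why meshing a regular depth grid is relatively easy, the sketch below triangulates an h-by-w grid of samples (two triangles per grid cell) and damps temporal sensor noise with a simple exponential moving average; both routines and the smoothing factor `alpha` are illustrative assumptions, not the specific method of any prior implementation:

```python
import numpy as np

def grid_triangles(h, w):
    """Connect an h x w grid of depth samples into a triangle mesh:
    each grid cell is split into two triangles (vertex indices)."""
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return tris

def smooth_depth(prev, new, alpha=0.3):
    """Exponential moving average across frames to damp the random
    variation (temporal noise) in the depth signal."""
    return alpha * new + (1 - alpha) * prev

tris = grid_triangles(2, 2)
print(len(tris))  # 2
```

Spatial noise would additionally call for filtering within a frame (e.g. a median or bilateral filter); this sketch only shows the temporal case.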
Moreover, video games often use avatars as a representation of the player in the game. Recently, new hardware in consoles has allowed games to move the avatar or make it react to the actual movements of the player, introducing augmented reality into such applications. A full three dimensional representation of the player has generally been too expensive and not visually acceptable enough to appear in games.
Hence, it would be beneficial if a real-time realistic rendering of objects could be generated from the new generation of depth cameras, at a reasonable cost.