Synthetic videos are useful in a variety of applications. For example, synthetic videos are used in tutorials, motion pictures, video games, public displays (e.g., airport safety videos), training videos, and other applications. Synthetic videos may be used to protect confidentiality, to portray situations too dangerous to film in the real world, to portray impossible situations (e.g., epic fantasy scenes), to reduce film production costs, or to otherwise meet video needs when live video is inadequate.
Conventional approaches to generating synthetic videos include using machine learning models (e.g., generative adversarial networks (GANs), convolutional neural networks (CNNs), or the like) to create the sequence of images that comprises a video. Some synthetic videos may be created by filming live action and mapping synthetic features onto the live action (e.g., altering physical appearance from a human to a non-human creature, or mapping synthetic objects onto real-world moving props). These methods of producing synthetic videos require tightly controlled filming conditions and specialized equipment, and involve many people, leading to high production costs and long production times.
To address these problems, fully synthetic (computer generated) videos may be created. However, fully synthetic videos often suffer from unrealistic motion, including object distortion and unnaturally abrupt (jerky) motion. For example, a fully synthetic video that depicts a person walking may exhibit unrealistic arm, leg, torso, or head movements, or distortions of facial features. In some systems, unrealistic motion arises because the models used to generate motion are not based on underlying properties of motion and/or are not based on real videos.
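As an illustration of the jerky motion described above, one simple (hypothetical) way to quantify it is to measure the frame-to-frame acceleration of an object's position: a trajectory derived from natural motion changes gradually, while a crudely generated trajectory changes in abrupt steps. The sketch below is not part of any described system; the `jerkiness` function and the example trajectories are illustrative assumptions only.

```python
import numpy as np

def jerkiness(positions):
    """Mean magnitude of the second difference (per-frame acceleration)
    of a 1-D position trajectory; larger values indicate jerkier motion."""
    accel = np.diff(positions, n=2)
    return float(np.mean(np.abs(accel)))

frames = np.arange(30)
smooth = np.sin(frames / 30 * np.pi)  # smooth, natural-looking path
abrupt = np.round(smooth * 4) / 4     # same path quantized into coarse steps

# The quantized path moves in sudden jumps, so its measured
# jerkiness is substantially higher than the smooth path's.
print(jerkiness(smooth) < jerkiness(abrupt))
```

A real system would apply this kind of check to tracked object positions across generated frames, but the principle is the same: motion that is not grounded in underlying physical properties tends to show large, abrupt accelerations.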
Therefore, in view of the shortcomings and problems with conventional approaches to synthetic video, there is a need for improved, unconventional, low-cost, and rapid systems that generate synthetic videos portraying realistic motion.