Computer animation has become a common feature of our shared media landscape. From cartoons and games to NASA artist conceptions and even courtroom forensic re-creations, computer animation is familiar to everyone and is a large and growing business. The software used to create these animations must all solve some common problems: generating realistic and pleasing motions, efficiently generating surfaces and meshes, and rendering those surfaces into frames. Although many prior art methods have been implemented for these tasks, they all suffer from some common drawbacks.
Prior art surface mesh implementations are hindered by requiring costly destructive edits during animation, and by lacking a fast method of approximating high-quality subdivision surfaces that supports polygons of any order. They are also unable to compute the attributes of a deformed mesh, such as its bounding box, without actually performing the deformation. A better mesh implementation would allow edits without destroying the original mesh, and would provide a method of approximating the limit surface of a subdivision control mesh quickly and with control over edge sharpness.
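The non-destructive editing idea described above can be sketched as an edit layer of vertex overrides stacked on an immutable base mesh, so queries such as the bounding box see the edited result while the original data survives untouched. This is a minimal illustrative sketch, not the implementation of any particular product; the `BaseMesh` and `EditLayer` names are assumptions invented here.

```python
# Sketch of non-destructive mesh editing: edits live in a layer of
# per-vertex overrides on top of an immutable base mesh, so the original
# is never destroyed.  All class and method names are illustrative.

class BaseMesh:
    def __init__(self, vertices):
        # Store an immutable copy of the vertex positions.
        self._vertices = tuple(tuple(v) for v in vertices)

    def vertex(self, i):
        return self._vertices[i]

    def __len__(self):
        return len(self._vertices)


class EditLayer:
    """A reversible edit: overrides individual vertices of a base mesh."""

    def __init__(self, base):
        self.base = base
        self.overrides = {}            # vertex index -> new position

    def move_vertex(self, i, pos):
        self.overrides[i] = tuple(pos)

    def vertex(self, i):
        # Fall through to the base mesh when no override exists.
        return self.overrides.get(i, self.base.vertex(i))

    def bounding_box(self):
        # Attributes are computed from the composited (edited) view.
        pts = [self.vertex(i) for i in range(len(self.base))]
        lo = tuple(min(c) for c in zip(*pts))
        hi = tuple(max(c) for c in zip(*pts))
        return lo, hi


base = BaseMesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
edit = EditLayer(base)
edit.move_vertex(2, (0, 2, 0))
assert base.vertex(2) == (0, 1, 0)     # original mesh is untouched
assert edit.vertex(2) == (0, 2, 0)     # edited view sees the override
```

Discarding the layer restores the original exactly, which is the property destructive prior art edits lack.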
Animation in the prior art never treats time correctly. High-end animation requires that many secondary motions be computed algorithmically from the primary motions designed by the animator, and some of these computations require knowing the state of the animation at previous or future times, so an animation system must handle these cases in a logical and consistent manner. Instead, prior art systems employ ad hoc methods that are limited or that introduce errors through feedback loops. An ideal animation engine would treat time as an intrinsic quantity, fully supporting time-warping expressions and dynamic simulations, and would make it easy for the animator to insert and manage these secondary animation effects.
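The idea of treating time as an intrinsic quantity can be illustrated with a tiny sketch in which every channel is a function of time, so a secondary motion can sample the primary motion at an earlier time (a simple "lag" time-warp) by ordinary evaluation, with no caching or feedback loop. The `primary` and `lagged` names are assumptions for illustration only.

```python
# Sketch of time-centric evaluation: channels are functions of time, so
# sampling the past or future is just another evaluation, consistent by
# construction.  Names here are illustrative, not from any real system.

def primary(t):
    """Animator-authored primary motion: e.g. position over time."""
    return 10.0 * t

def lagged(channel, delay):
    """Secondary motion that follows `channel`, delayed by `delay` seconds."""
    def eval_at(t):
        # Querying a previous time is an ordinary, side-effect-free call.
        return channel(t - delay)
    return eval_at

follower = lagged(primary, 0.5)
assert follower(2.0) == primary(1.5)   # the lag holds at every time
```

Because the warped channel is itself just a function of time, such expressions compose: a lagged copy of a lagged channel behaves exactly as the arithmetic predicts.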
3D rendering in the prior art is never fast enough. All prior art solutions have trouble handling large amounts of geometry and producing high-quality results from a limited number of render samples. Every renderer must trade off quality, either because it cannot take enough samples or because it spends too long shading the samples needed for clean anti-aliased edges. What is needed is rendering software that can take more samples and still run faster, by partitioning large data sets more effectively and by doing less shading while still achieving good anti-aliasing.
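The "less shading, more samples" trade-off can be sketched by decoupling shading from visibility sampling: shade a surface once per pixel, then reuse that one shaded color across many cheap visibility samples to estimate edge coverage. This is a simplified one-dimensional illustration under assumed names (`render_pixel`, `expensive_shader`), not a description of any specific renderer.

```python
# Sketch of decoupled shading and visibility sampling: one expensive
# shading call is reused across many cheap visibility samples, so edges
# get high sample counts without extra shading cost.

def render_pixel(px, edge_x, shader, background, vis_samples=16):
    """Anti-alias a vertical edge at x = edge_x within pixel [px, px+1)."""
    shaded = shader()                      # expensive shading: done ONCE
    hits = 0
    for s in range(vis_samples):           # cheap visibility tests: many
        x = px + (s + 0.5) / vis_samples   # stratified sample positions
        if x < edge_x:
            hits += 1
    coverage = hits / vis_samples
    # Blend the single shaded color against the background by coverage.
    return tuple(coverage * c + (1 - coverage) * b
                 for c, b in zip(shaded, background))


calls = []
def expensive_shader():
    calls.append(1)                        # count shader invocations
    return (1.0, 0.0, 0.0)

pixel = render_pixel(0, 0.25, expensive_shader, (0.0, 0.0, 0.0))
assert len(calls) == 1                     # 16 visibility samples, 1 shade
```

With 16 visibility samples and one shader call, the edge coverage is resolved to 1/16-pixel precision at roughly the shading cost of a single unsampled pixel.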