Image processing and visualization have had a significant impact on a wide spectrum of media, including animation, film, advertising, and video games. One area that has benefited greatly from image processing and visualization is medical imaging, where volume rendering of medical images has become standard for many procedures and diagnostics. Volume rendering with color capabilities may be used to visualize human organs and body regions across several medical imaging modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI).
Current medical imaging scanners provide three-dimensional (3D) submillimeter resolution, enabling a variety of 2D and 3D post-processing and visualization techniques with excellent image quality. For example, advances in volume rendering provide physically based rendering algorithms that simulate the complex interaction between photons and the scanned anatomy to produce photo-realistic images and videos. Cinematic volume rendering produces photo-realistic images with lifelike ambient and lighting effects that suggest to the human eye that the image is “real”. Additional features include high-performance rendering and advanced camera techniques, such as variable aperture diameters and motion blur. Three-dimensional volume rendering may be used to visualize the complex internal anatomy of a patient, and volume rendering techniques offer relevant information for pre-operative planning as well as post-operative follow-up.
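Cinematic renderers typically rely on Monte Carlo path tracing of photon transport; as a much simpler stand-in, the sketch below shows classical emission-absorption ray marching through a scalar volume, which is the core compositing idea underlying direct volume rendering. The function name, the placeholder transfer function, and all parameters are illustrative assumptions, not any particular product's algorithm.

```python
import numpy as np

def render_ray(volume, origin, direction, step=0.5, n_steps=200):
    """Front-to-back emission-absorption compositing along one ray.

    `volume` is a 3D array of scalar densities. The transfer function
    below (density -> sample color and opacity) is a placeholder.
    """
    color, alpha = np.zeros(3), 0.0
    pos = origin.astype(float)
    for _ in range(n_steps):
        i, j, k = np.floor(pos).astype(int)
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break  # ray left the volume
        density = volume[i, j, k]
        # Hypothetical transfer function: density drives color and opacity.
        sample_alpha = min(density * step, 1.0)
        sample_color = np.array([density, density * 0.8, density * 0.6])
        # Composite front to back and stop early once nearly opaque.
        color += (1.0 - alpha) * sample_alpha * sample_color
        alpha += (1.0 - alpha) * sample_alpha
        if alpha > 0.99:
            break
        pos += direction * step
    return color, alpha
```

A cinematic renderer replaces this single deterministic march with many stochastic photon paths per pixel, which is what makes the frame cost discussed below so high.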
However, one drawback of producing high-quality images, such as with cinematic volume rendering, is that the volume rendering process requires additional resources and may not be efficient or fast. One particularly difficult task is animating a transition between two images. Because of the number of frames that must be rendered (e.g., 15, 30, or 60 frames per second), animation is challenging to perform at high resolution.
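The frame budget implied above is simple arithmetic but worth making explicit: every second of transition multiplies the cost of a single high-quality render by the frame rate. A minimal sketch (function name is illustrative):

```python
def frames_required(duration_s: float, fps: int) -> int:
    """Total frames the renderer must produce for one animated transition."""
    return int(round(duration_s * fps))

# A 3-second transition at the frame rates mentioned above:
for fps in (15, 30, 60):
    print(fps, "fps ->", frames_required(3.0, fps), "frames")
```

If a single cinematic frame takes seconds to converge, even a short transition at 60 fps quickly becomes impractical to render interactively.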
There are two broad categories of animation implementation: algorithmic and user-driven. In algorithmic animation, computer algorithms drive the animation, e.g., a physics engine computing rigid body dynamics for moving objects, automated camera path generation from landmarks detected in the data, automated visual abstractions of the data, data-driven lighting design, etc. Machine learning-based approaches play an important part in this category (e.g., artificial intelligence-driven crowd simulations). An example from medical visualization is the automatic creation of a 360° turntable animation from a user-specified rendering preset. In user-driven animation, the modeling application provides task-specific tools to design, preview, and render computer animation. The animation specification can be low level, e.g., specifying each parameter on the animation timeline by drawing curves, or a higher-level specification of visual effects. Examples include the major 3D modeling applications (e.g., Maya, Inventor), 3D engines (e.g., Unity 3D), and many specialized visualization packages (e.g., ParaView).
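The 360° turntable animation mentioned above can be sketched as a generated camera path: one camera position per frame, orbiting the volume's center at a fixed radius and elevation. The function and parameter names are assumptions for illustration, not a specific product's API.

```python
import math

def turntable_path(center, radius, elevation, n_frames):
    """Camera positions for one full 360-degree orbit around `center`.

    Returns `n_frames` (x, y, z) positions; each frame is rendered with
    the camera at one position, looking at `center`.
    """
    cx, cy, cz = center
    positions = []
    for f in range(n_frames):
        theta = 2.0 * math.pi * f / n_frames  # angle for this frame
        positions.append((cx + radius * math.cos(theta),
                          cy + elevation,
                          cz + radius * math.sin(theta)))
    return positions
```

This is the "algorithmic" flavor of animation: the user supplies only a rendering preset, and the camera trajectory is computed rather than keyframed by hand.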
In terms of visualization, existing approaches generally involve image-based blending, in which images that vary a given parameter of the visualization preset are rendered independently and then blended together. The resulting visualization is not correct for object movement or for direct volumetric rendering. Additionally, many existing animation systems do not support smooth transitions between complex rendering parameters, such as the transfer function and (for image-based lighting) light probes. Such systems may require manual user intervention to resolve visual artifacts in the rendered videos, or they may employ algorithms that are too computationally expensive for many applicable uses.
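The failure mode of image-based blending for object movement can be shown with a toy example: a linear cross-fade mixes pixels, not the underlying scene parameters, so an object rendered at two different positions produces two half-intensity "ghosts" at the midpoint instead of one object halfway along its path. The code below is a minimal sketch of that effect, not any existing system's blending routine.

```python
import numpy as np

def crossfade(image_a, image_b, t):
    """Linear image-space blend of two independently rendered frames.

    t = 0 returns image_a, t = 1 returns image_b; intermediate values
    mix pixel intensities directly.
    """
    return (1.0 - t) * image_a + t * image_b

# Two renders of the same bright object at different positions
# (represented here as a 1x4 grayscale strip):
a = np.zeros((1, 4)); a[0, 0] = 1.0   # object at the left
b = np.zeros((1, 4)); b[0, 3] = 1.0   # object at the right
mid = crossfade(a, b, 0.5)
# The blend contains two dim copies rather than one object in the middle,
# which is why image-space blending is incorrect for object movement.
```

Correct motion would require re-rendering the scene at interpolated parameters for each frame, which is exactly the expensive step that image-based blending tries to avoid.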