The present invention relates to the field of computer graphics, and in particular to methods and apparatus for creating motion blur effects.
Many computer graphic images are created by mathematically modeling the interaction of light with a three-dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene. Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic and/or aesthetically pleasing rendered images and animations.
Previously, computer graphics rendering used either analytic or sampling-based techniques to determine the attribute values of pixels of an image from three-dimensional scene data. Analytic techniques attempt to determine the exact contribution of scene data to the attribute value of a pixel. For example, analytic anti-aliasing techniques attempt to determine the exact coverage of a pixel by a polygon or other geometric element. One type of analytic anti-aliasing technique determines the convolution integral of a pixel filter kernel and a polygon or geometric element partially or entirely covering the pixel.
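The exact-coverage idea can be illustrated with a minimal sketch. The scenario below is an assumption, not taken from the source: a half-plane edge covering part of a unit pixel, evaluated with a box filter, for which the convolution integral reduces to the covered area.

```python
# Sketch (hypothetical scenario): analytic coverage of a unit pixel by the
# half-plane x < edge, using a box filter. The pixel value is the exact
# covered area, computed with no sampling and hence no aliasing.

def box_coverage(px, edge):
    """Exact fraction of the pixel [px, px+1) covered by the half-plane x < edge."""
    return min(max(edge - px, 0.0), 1.0)
```

For more general filter kernels and geometry, the corresponding integral rarely has such a closed form, which is one source of the mathematical difficulty noted below.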
Analytic rendering techniques provide high quality results without aliasing or other artifacts. However, analytic rendering techniques are often very time-consuming and mathematically difficult to process.
Additionally, many rendering effects, such as motion blur, depth of field, soft shadowing, complex illumination, refraction, and reflection are impractical to perform using analytic techniques for typical computer graphics scenes.
As a result of the difficulties with analytic rendering techniques, sampling-based rendering techniques are predominantly used to render images. Sampling-based techniques determine attribute values from the three-dimensional scene data at discrete points in space and/or time. These attribute values are then combined to determine the attribute values of pixels of the image.
One example of sample-based anti-aliasing divides each pixel into a plurality of sub-pixels. The attribute values of the sub-pixels are determined by sampling the three-dimensional scene data and then combined to determine the attribute value of the pixel.
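The sub-pixel scheme above can be sketched as follows. The function `scene_color` is a hypothetical stand-in for sampling the three-dimensional scene data at a point; here it models a hard vertical edge so the anti-aliasing effect is visible.

```python
# Illustrative sketch (assumed scene, not from the source): anti-aliasing
# by averaging a grid of sub-pixel samples within each pixel.

def scene_color(x, y):
    # Hypothetical scene: a hard vertical edge at x = 0.5
    # (white on the left, black on the right).
    return 1.0 if x < 0.5 else 0.0

def pixel_value(px, py, n=4):
    """Average an n x n grid of sub-pixel samples inside pixel (px, py)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Sample at the center of each sub-pixel.
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += scene_color(sx, sy)
    return total / (n * n)
```

A pixel straddling the edge averages to an intermediate gray, which is the anti-aliased result.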
In another example, motion blur is a phenomenon resulting in the apparent streaking or blurring of rapidly moving objects. Motion blur occurs in still and motion picture photography as objects move relative to the camera during the period of exposure determined by the camera's shutter. Motion blur may occur because objects move, because the camera moves, or because of combinations of both object and camera movement.
Renderers and other computer graphics applications previously simulated motion blur effects by specifying a “shutter time” for a virtual camera. An example of a sample-based motion blur effect assigns a different time value within the shutter time period to each pixel or sub-pixel. For each frame of animation, the renderer evaluates the motion path or other animation data of an object at different discrete times within the shutter time interval to determine several different positions or poses of the object over that interval. The renderer then renders at least portions of the object at these discrete times to create a motion-blurred image. Thus, different pixels or sub-pixels “see” the scene at different times, producing a motion blur effect.
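The time-sampling step above can be sketched briefly. The motion path `position_at` and the shutter length are assumptions for illustration; the point is only that each sample evaluates the object's animation at its own time within the shutter interval.

```python
import random

# Sketch (assumed motion path, not the source's renderer): each sample
# "sees" the scene at a different time within the shutter interval, so a
# moving object contributes to many positions along its path.

def position_at(t, speed=100.0):
    # Hypothetical motion path: x position in pixels at time t (in frames).
    return speed * t

def motion_blur_samples(num_samples, shutter=0.5, seed=1):
    rng = random.Random(seed)
    positions = []
    for _ in range(num_samples):
        t = rng.uniform(0.0, shutter)     # a time within the shutter interval
        positions.append(position_at(t))  # evaluate the motion path at that time
    return positions
```

Averaging the contributions of these samples into the image spreads the object along its path, producing the streaked appearance of motion blur.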
An example of a sample-based depth of field effect assigns a different lens aperture position to each pixel or sub-pixel, so that the pixels or sub-pixels “see” the scene from different points of view, producing a depth of field or focus effect.
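A minimal sketch of this idea follows, using a simplified thin-lens model that is an assumption for illustration: each sample views the scene from a different point on a disk-shaped aperture, and a point away from the focal depth projects to a different offset from each lens position, blurring it.

```python
import random

# Sketch (hypothetical simplified thin-lens model, not from the source):
# each sample views the scene from a different point on the lens aperture.

def aperture_sample(rng, radius):
    # Rejection-sample a point on a disk-shaped lens aperture.
    while True:
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return (x, y)

def blur_offsets(point_depth, focal_depth, num_samples=8, radius=0.1, seed=4):
    rng = random.Random(seed)
    offsets = []
    for _ in range(num_samples):
        lx, ly = aperture_sample(rng, radius)
        # A point at the focal depth projects identically from every lens
        # position; away from it, the projected offset grows with the lens
        # offset and the depth difference (simplified model).
        scale = (point_depth - focal_depth) / point_depth
        offsets.append((lx * scale, ly * scale))
    return offsets
```

An in-focus point yields identical projections from every aperture sample, while an out-of-focus point is smeared over a disk whose size grows with its distance from the focal plane.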
Example sample-based illumination effects select multiple different discrete points within a light source for each illuminated portion of a scene to determine the illumination from this light source. This produces soft-shadowing effects. Example sample-based reflection and refraction effects select multiple different discrete points within a scene for each reflective or refractive portion of a scene to produce reflection or refraction effects.
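The soft-shadowing case can be sketched as follows. The occluder and light geometry are assumptions for illustration; the technique is simply averaging the visibility of several points chosen on the area light.

```python
import random

# Sketch (hypothetical scene, not from the source): soft shadowing by
# sampling several points on an area light and averaging their visibility.

def visible(shade_point, light_point):
    # Hypothetical occluder: blocks the half of the light with x < 0.
    return light_point[0] >= 0.0

def soft_shadow(shade_point, num_samples=16, seed=2):
    rng = random.Random(seed)
    lit = 0
    for _ in range(num_samples):
        # Pick a point on a square area light spanning x, y in [-1, 1].
        lp = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        if visible(shade_point, lp):
            lit += 1
    return lit / num_samples  # fraction of the light that is unoccluded
```

Points for which only part of the light is visible receive a fractional value, producing the penumbra of a soft shadow rather than a hard edge.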
Each of these sample-based effects requires a large number of samples to operate effectively. When multiple effects are used together, the number of samples required to render a pixel increases substantially. Furthermore, sampling requires that the data source be bandlimited in frequency to prevent aliasing. Unlike a one-dimensional signal or a two-dimensional image, four-dimensional scene data (three spatial dimensions and time) is impractical to filter. Thus, aliasing effects often occur; these must be minimized either by increasing the number of samples, which increases the computational resources required for rendering, or by using stochastic sampling, which hides aliasing at the expense of increased noise.
For example, temporal artifacts, such as aliasing and noise, are one problem with sample-based motion blur rendering techniques. Temporal aliasing and noise artifacts in motion blur occur because the motion of the object is always rendered or sampled at discrete moments in time. If the number of samples per frame is less than the spatial-temporal variation due to the object and its motion, then aliasing visual artifacts can occur. For example, if a small object moves at a speed of one pixel or sub-pixel image sample per frame, then the object's motion will be synchronized with the motion blur sampling. This will cause a flickering or beating visual artifact, as the object is consistently sampled at the same place on the object at the same time interval for each frame. Distributing samples randomly or pseudo-randomly in space and/or time reduces aliasing artifacts, but introduces noise artifacts. Noise artifacts may increase as the number of dimensions of evaluation increases.
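The synchronization described above can be demonstrated numerically. The setup is an assumption for illustration: an object moving one pixel per frame, sampled once per frame, either at a fixed phase within each frame or at a jittered (randomized) time.

```python
import random

# Sketch illustrating the synchronization described above: an object moving
# one pixel per frame, sampled at a fixed phase within each frame, is always
# seen at the same sub-pixel offset; jittered sample times break this up.

def sampled_offsets(num_frames, jitter, seed=3):
    rng = random.Random(seed)
    offsets = []
    for frame in range(num_frames):
        t = frame + (rng.random() if jitter else 0.25)  # sample time in frame
        position = t * 1.0              # speed: one pixel per frame
        offsets.append(position % 1.0)  # sub-pixel offset where it is seen
    return offsets

regular = sampled_offsets(8, jitter=False)   # identical offset every frame
jittered = sampled_offsets(8, jitter=True)   # varying offsets, but noisy
```

The regular schedule samples the object at the same sub-pixel offset every frame (the beating artifact), while jittering varies the offsets at the cost of noise.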
Aliasing artifacts are a common problem when applying motion blur to rotating objects. When an object is rotating, often some point along its radius will be moving at the critical speed that causes aliasing. Aliasing artifacts are also a common problem when applying motion blur to fast moving objects. If an object travels at a rate of 100 pixels across an image per frame, the motion blur effect should appear as a 100 pixel long streak. However, because the renderer samples the motion of the object only a small number of times, for example four time samples per frame, the image of the object will appear as a sparse set of points, rather than a continuous streak.
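The 100-pixel example above can be checked with a short sketch. The evenly spaced sample times are an assumption for illustration; the point is only how few distinct pixels the object actually lands on.

```python
# Sketch of the numbers above: an object crossing 100 pixels per frame,
# sampled at only four times within the shutter interval, lands on a few
# isolated pixels instead of forming a continuous 100-pixel streak.

def sampled_pixels(speed=100.0, num_samples=4, shutter=1.0):
    pixels = set()
    for k in range(num_samples):
        t = (k + 0.5) / num_samples * shutter  # evenly spaced sample times
        pixels.add(int(speed * t))             # pixel the object lands on
    return sorted(pixels)
```

With four samples, the object appears at only four widely separated pixels, consistent with the sparse set of points described above.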
A prior approach to reducing temporal artifacts is to increase the number of samples. For example, to decrease temporal aliasing in sample-based motion blur effects, the number of different sample times used by the renderer to evaluate the motion of an object within the shutter interval of each frame is increased. However, increasing the number of sample times greatly increases the time, memory, and computational resources required for rendering. Additionally, regardless of the number of sample times used, there may still be some types of object motion that create temporal artifacts.
In summary, prior analytic rendering techniques provide accurate and high-quality visual output, but tend to be very time-consuming and mathematically difficult to process, especially for complex scenes using many different rendering effects together, such as anti-aliasing, motion blur, depth of field, and illumination effects. Sampling-based techniques are mathematically tractable for rendering complex scenes with many different rendering effects, but are prone to artifacts such as aliasing and noise. Increasing the sampling rate reduces these artifacts; however, this increases the time and computational resources required for rendering. Moreover, regardless of the sampling rate, there may still be artifacts such as aliasing.