Motion blur is a significant defect of most current display technologies. It arises when a display presents individual frames that persist for a large fraction of the frame duration. When the eye smoothly tracks a moving image, that image is smeared across the retina during the frame duration. Although motion blur may be manifest in any moving image, one widely used test pattern is a moving edge. This pattern gives rise to measurements of what is called moving-edge blur.
A number of methods have been developed to measure moving-edge blur, among them pursuit cameras, so-called digital pursuit cameras, and calculations starting from the step response of the display. These methods generally yield a waveform—the moving edge temporal profile (METP)—that describes the cross-sectional profile of the blur [1].
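The origin of this waveform can be illustrated with a minimal simulation, not taken from the paper: assuming an ideal sample-and-hold display and perfect smooth-pursuit tracking, averaging the tracked image over one frame duration smears a step edge into a ramp. All names and parameters below are illustrative.

```python
import numpy as np

def moving_edge_profile(speed_px_per_frame, width=64, substeps=100):
    """Retinal profile of a tracked moving edge (illustrative sketch).

    The display holds a step edge fixed for the whole frame while the
    eye moves continuously, so the edge is smeared across the retina.
    """
    x = np.arange(width) - width / 2          # retinal coordinate (px)
    profile = np.zeros(width)
    for k in range(substeps):                 # sub-frame time samples
        eye_shift = speed_px_per_frame * k / substeps
        # step edge at display position 0, seen at retinal x + eye_shift
        profile += (x + eye_shift >= 0).astype(float)
    return profile / substeps

p = moving_edge_profile(8.0)
# The transition region spans roughly speed_px_per_frame pixels: an
# ideal hold-type display smears the edge by one frame's worth of motion.
```

In this idealized case the profile is a linear ramp; a real METP also reflects the display's finite response time, which is why it is measured or computed from the step response.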
Several methods have also been developed to convert this waveform to a single-number metric of motion blur. Examples are the Blur Edge Time (BET), Gaussian Edge Time (GET), and Perceptual Blur Edge Time (PBET) [1]. However, none of these metrics attempts to provide a perceptual measure of the amount of motion blur.
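As a concrete example of such a single-number metric, a blur edge time can be estimated as the 10%–90% transition interval of the METP. This sketch is not the paper's code, and the 10%/90% thresholds are a common convention assumed here for illustration.

```python
import numpy as np

def blur_edge_time(t, metp, lo=0.10, hi=0.90):
    """10%-90% transition time of an METP waveform (illustrative).

    Assumes the normalized waveform rises monotonically, so that the
    threshold crossings can be found by inverse interpolation.
    """
    m = (metp - metp.min()) / (metp.max() - metp.min())  # normalize 0..1
    t_lo = np.interp(lo, m, t)
    t_hi = np.interp(hi, m, t)
    return t_hi - t_lo

t = np.linspace(0.0, 1.0, 101)   # time in units of frame duration
metp = t.copy()                  # ideal linear ramp (hold-type display)
bet = blur_edge_time(t, metp)    # → 0.8 frame times for a 1-frame ramp
```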
First, none of these metrics takes into account the contrast of the edge and its effect upon perceived blur. In general, blur becomes less visible when contrast decreases [2, 3], and the apparent width of motion blur declines with reduced contrast [4]. Second, the contrast of the edge will mask the visibility of the blur [5, 6]. Thus a model of blur visibility must take this masking effect into account.
The need to incorporate contrast is especially pressing because measurements of motion blur are often made at several contrasts (gray-to-gray transitions) [7, 8]. Those separate measurements must then be combined in some perceptually relevant way.
Finally, none of the existing metrics takes into account the visual resolution of the display (pixels per degree of visual angle). For a given speed in pixels per frame, a higher visual resolution will yield a less visible artifact.
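The resolution dependence can be made concrete with a simple conversion, assumed here for illustration rather than taken from the paper: the spatial extent of the blur in degrees of visual angle is the number of pixels traversed during the blur time divided by the display's visual resolution.

```python
def blur_extent_deg(speed_px_per_frame, bet_frames, px_per_deg):
    """Angular extent of moving-edge blur (illustrative conversion).

    speed_px_per_frame * bet_frames gives the blur width in pixels;
    dividing by pixels per degree converts it to visual angle.
    """
    return speed_px_per_frame * bet_frames / px_per_deg

# Same 8 px/frame edge speed and 1-frame blur time, two resolutions:
blur_extent_deg(8.0, 1.0, 30.0)   # 30 ppd  → ~0.27 deg of blur
blur_extent_deg(8.0, 1.0, 60.0)   # 60 ppd  → ~0.13 deg of blur
```

The same motion in pixels thus subtends half the visual angle on the higher-resolution display, consistent with the artifact being less visible there.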