1. Field of the Invention
The present invention relates to an image processing apparatus and image processing method and, more particularly, to an image processing apparatus and image processing method for playing back a moving image including a plurality of time-divided frame images.
2. Description of the Related Art
Generally, a human color perception model is designed to enable correct prediction of human color perception. A typical example is a color appearance model. The color appearance model accounts for changes in viewing conditions. Since viewing information such as luminance, white point, and ambient relative luminance can be set as viewing condition parameters, the color appearance model can appropriately reproduce color appearance even under different viewing conditions.
An image appearance model is an extended color appearance model that additionally reflects the human spatial visual characteristic and temporal visual characteristic. A known typical example is iCAM. In particular, to predict the appearance of a moving image using an image appearance model, it is necessary to reflect temporal visual characteristics such as light adaptation and chromatic adaptation.
A technique of applying an image appearance model (iCAM) to a moving image is described in, for example, M. D. Fairchild and G. M. Johnson, "Image appearance modeling", SPIE/IS&T Electronic Imaging Conference, SPIE Vol. 5007, Santa Clara, 149-160 (2003). This reference describes the following technique. First, the low-pass images and Y low-pass images (absolute luminance images) from the frame image 10 sec before to the frame image of interest are multiplied by an adaptation weight calculated by

AW(f) = 0.000261·e^(0.000767·f) + 0.0276·e^(0.0297·f)  (1)

where AW is the calculated weight, and f is the frame number. More specifically, when the number of frames per sec is 30, f = 0 represents the current frame number, and f = -300 represents the frame number 10 sec before.
The plurality of weighted low-pass images and Y low-pass images are then composited to generate the low-pass image and Y low-pass image of the frame of interest. These images are used in the framework of iCAM, thereby playing back a moving image while reflecting the spatial and temporal visual characteristics.
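The weighting and compositing steps described above can be sketched as follows. This is a simplified illustration, not the reference's implementation: per-frame low-pass values are represented as scalars rather than images, the helper names are hypothetical, and normalizing the weights so they sum to 1 is an assumption made here for the composite to be a weighted average.

```python
import math

def adaptation_weight(f):
    """Adaptation weight of equation (1).

    f = 0 is the current frame; at 30 frames/sec, f = -300 is the
    frame 10 sec before. Weights grow toward the current frame.
    """
    return 0.000261 * math.exp(0.000767 * f) + 0.0276 * math.exp(0.0297 * f)

def composite_low_pass(frame_values):
    """Composite per-frame low-pass values into the value for the
    frame of interest (scalars here; in practice these would be
    low-pass filtered images, composited pixel by pixel).

    frame_values[-1] is the frame of interest; frame_values[0] is
    the oldest frame, 10 sec before it.
    """
    n = len(frame_values)
    # Frame i corresponds to frame number f = i - (n - 1), i.e. 0 for
    # the frame of interest, down to -(n - 1) for the oldest frame.
    weights = [adaptation_weight(i - (n - 1)) for i in range(n)]
    total = sum(weights)  # assumed normalization
    return sum(w * v for w, v in zip(weights, frame_values)) / total
```

For example, with 301 frames (10 sec at 30 frames/sec plus the frame of interest), the oldest frame receives weight AW(-300), which is far smaller than AW(0) = 0.027861, so recent frames dominate the composite.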
A general moving image includes a plurality of objects. The objects have different features, for example, different durations for which they are displayed. When viewing a played-back moving image, there is normally an object that should receive attention (an object of interest).
The technique described in the above reference performs uniform appearance processing on the entire frame image. Since the temporal visual characteristic and the like are reflected uniformly on the entire frame image, it is impossible to apply the temporal visual characteristic to each individual object of interest.
An example will be described with reference to FIG. 12. Referring to FIG. 12, assume that an observer has been continuously viewing a car A since time 0 sec, which is 10 sec before the time at which iCAM is applied. In this case, the car A in the frame image 10 sec after is close to complete adaptation, and the adaptation weight for the car A should have a high value close to 1. However, according to the method of the above-described reference, since the temporal visual characteristic is reflected on the entire frame image, and the position of the car A moves, the adaptation weight actually has a low value. Conversely, referring to FIG. 12, when a car B is the object of interest, the above-described conventional method applies an adaptation time longer than the actual observation time of the car B. For this reason, the adaptation weight for the car B has an excessively high value.