(1) Field of the Invention
The present invention relates to an image defocusing apparatus capable of simulating a real lens's focusing effects on an image formed by computer graphics, and to a method thereof.
(2) Description of the Related Art
The rapid progress in the field of computer graphics has made it possible to composite an image formed by computer graphics into a video taken by a video camera. The synthetic image, however, will be unrealistic unless the depth distance is properly taken into account, for example, by a process called "defocusing".
With the "defocusing", an object in focus appears clear and sharp while points in front of and behind the object, i.e., out-of-focus points, appear fuzzy and blurred, thereby producing an image similar to one seen through a camera. (Rendering an image by modeling an object with a boundary box is a known art, and the explanation thereof is omitted herein.)
Typically, a method called "ray-tracing" is used for the "defocusing". However, "ray-tracing" and other applicable methods involve a considerable amount of computation, and thus the entire process takes quite a long time. To solve this problem, a simpler, faster method is disclosed in Japanese Laid-Open Patent Application No. 63-259778. In this method, a region of defocus is determined for each sample point from its Z value, which represents the distance from the view point to the sample point, and from the focal point of the lens being used; the intensities within the region of defocus are then averaged, and the original intensities are updated with the mean intensity.
A more detailed explanation will be given by referring to FIG. 1, which shows the relation between the Z values and the regions of defocus relative to the focal point. Here, the view point is fixed at the coordinate z=0, and several points, Za, Zb, Zc (the focal point), Zd, and Ze, are fixed along the z axis, the direction in which the eye ray is aimed. Let Z be the z coordinate of a sample point, i.e., its Z value; then the region of defocus of the sample point is determined as follows:
   Conditions                          Region of Defocus
   1) Zd ≥ Z > Zb                      none
   2) Ze ≥ Z > Zd or Zb ≥ Z > Za       3 × 3
   3) Z > Ze or Z ≤ Za                 5 × 5
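The threshold test above can be sketched as follows (a minimal sketch; the function name and the particular threshold values in the usage below are illustrative, not from the disclosure):

```python
def defocus_region_size(z, za, zb, zd, ze):
    """Return the side length of the square region of defocus for a
    sample point at depth z, given the threshold depths za < zb < zd < ze
    (the focal point Zc lies between zb and zd).  A size of 1 means the
    point is in focus and no defocusing is applied."""
    if zb < z <= zd:                    # condition 1): in focus
        return 1
    if zd < z <= ze or za < z <= zb:    # condition 2): near the focal range
        return 3
    return 5                            # condition 3): far from the focal range


# Illustrative thresholds: za=1, zb=2, zd=4, ze=5 (focal point near z=3).
print(defocus_region_size(3.0, 1.0, 2.0, 4.0, 5.0))  # 1 (in focus)
print(defocus_region_size(4.5, 1.0, 2.0, 4.0, 5.0))  # 3
print(defocus_region_size(6.0, 1.0, 2.0, 4.0, 5.0))  # 5
```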
In the drawing, a square 501 represents the pixel corresponding to the sample point, while squares 502 and 503 represent the regions of defocus under the conditions 2) and 3), respectively.
Given the region of defocus, either an arithmetic mean or a weighted mean of the intensities is computed, and the original intensities of all the pixels within the region of defocus are updated with the mean intensity. Since the area of the region of defocus is proportional to the distance from the focal point, a viewer can see the depth distance in the resulting defocused image.
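A minimal sketch of this mean-intensity update, assuming a grayscale image stored as a list of rows and a precomputed region size per pixel (the disclosed method does not specify the update order where regions overlap, so later writes simply overwrite earlier ones here):

```python
def defocus(image, sizes):
    """Apply mean-intensity defocusing.  `image` is a list of rows of
    grayscale intensities; `sizes[y][x]` is the side length of the square
    region of defocus for the sample point (x, y), 1 meaning in focus.
    For every out-of-focus sample point, the arithmetic mean over its
    region is computed from the original image and written to all pixels
    of the region in the output."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            n = sizes[y][x]
            if n == 1:
                continue                 # in focus: leave the pixel unchanged
            r = n // 2
            # Clamp the region of defocus to the image bounds.
            xs = range(max(0, x - r), min(w, x + r + 1))
            ys = range(max(0, y - r), min(h, y + r + 1))
            mean = sum(image[j][i] for j in ys for i in xs) / (len(xs) * len(ys))
            for j in ys:
                for i in xs:
                    out[j][i] = mean
    return out


# One bright out-of-focus pixel spreads its intensity over its 3x3 region.
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
sz = [[1, 1, 1], [1, 3, 1], [1, 1, 1]]
print(defocus(img, sz))  # every pixel becomes 1.0
```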
By using the mean intensity as has been described, the amount of computation is reduced, but only at the expense of degraded image quality.
For example, when a model shown in FIG. 2A--a red object with a white background placed in the boundary box--is rendered into the image shown in FIG. 2B and then defocused, the result is the image shown in FIG. 2C.
This is because the mean intensity of the region of defocus a2 for the sample point a1 inevitably includes the red intensity, since a2 overlaps pixels on the object's image. This causes the edge of the object to appear fuzzy and blurred, or the object to appear larger than it should in the image: a phenomenon that never occurs when the scene is seen through a camera. A similar phenomenon occurs even when the object is out of focus, as long as there is depth distance in the model.
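The edge artifact can be illustrated numerically with a single scalar "redness" channel (the values and indices below are hypothetical, chosen only to show the effect):

```python
# One row of "redness" values near the object's edge: pixels 0-2 are the
# white background (0.0), pixels 3-5 lie on the red object (1.0).
row = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

# The sample point a1 is the background pixel at index 2.  Its region of
# defocus a2 (shown here in one dimension as indices 1..3) overlaps the
# object pixel at index 3, so the mean picks up the red intensity.
region = row[1:4]
mean = sum(region) / len(region)
print(mean)  # ~0.333: red has bled into the white background
```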