1. Field of the Invention
The present invention relates to a view synthesis technology, and in particular to a view synthesis method capable of depth mismatching checking and depth error compensation.
2. Brief Discussion of the Related Art
The view synthesis technology is derived from the Depth-Image-Based Rendering (DIBR) technology. In real-life viewing, when a viewer faces objects in a scene and moves a short distance, the change of relative position between the viewer and an object located at a large distance is much smaller than the change of relative position between the viewer and an object located at a small distance. View synthesis makes use of this principle to simulate and obtain an intermediate frame. Since the relative movement of a foreground object is much larger than that of a background object, once depth information is obtained, the amount of change of relative position of an object between the left and right frames can be derived. Based on this change amount, each object having a different depth value is given a different warping result, so as to simulate and obtain a pseudo view frame, as shown in equation (1) below:

I_w(i,j) = (1−w)*I_L(i+w*d, j) + w*I_R(i−(1−w)*d, j)  (1)
Equation (1) above indicates the operations of first performing warping computation respectively on the original left and right maps, then combining the results to synthesize a pseudo view frame. In equation (1), (i, j) represents the coordinate of a pixel on the map; w represents the view weight parameter, with its value from 0 to 1, where a value close to 1 means the synthesized view is closer to the right real camera; I_w, I_L, and I_R represent respectively the pseudo frame, the left frame, and the right frame, with I_L(i, j) and I_R(i, j) representing the pixel values at coordinate (i, j) in the left and right maps; and d is the disparity value computed from the depth value, namely the difference of relative positions of the same object in the left and right frames.
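The per-pixel blending of equation (1) can be sketched as follows. This is a minimal illustration, not the method of the invention: it assumes grayscale frames, a per-pixel disparity map d aligned with the synthesized view, and nearest-neighbor sampling; the function name and array layout are illustrative.

```python
import numpy as np

def synthesize_view(I_L, I_R, d, w):
    """Sketch of equation (1): blend warped left/right samples.

    I_L, I_R : 2-D grayscale frames (H x W), float arrays.
    d        : per-pixel disparity map, assumed aligned with the
               synthesized (target) view.
    w        : view weight in [0, 1]; close to 1 means the view is
               closer to the right real camera.
    """
    H, W = I_L.shape
    out = np.zeros((H, W), dtype=np.float64)
    for j in range(H):           # j indexes rows
        for i in range(W):       # i indexes columns, as in I(i, j)
            iL = int(round(i + w * d[j, i]))          # sample column in left frame
            iR = int(round(i - (1.0 - w) * d[j, i]))  # sample column in right frame
            if 0 <= iL < W and 0 <= iR < W:
                out[j, i] = (1.0 - w) * I_L[j, iL] + w * I_R[j, iR]
            # pixels with no valid sample stay 0: these are the holes
    return out
```

With d = 0 everywhere the result reduces to a simple weighted average of the two frames, which makes the role of w easy to verify.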
In general, a complete view synthesis algorithm is composed of two parts of computation: warping computation and Hole Filling. The warping computation changes the relative positions of the objects on the frame based on the disparity value and the view weight parameter. The dark portions that appear after warping computation are the portions originally blocked by the foreground, and are referred to as Occlusion Regions or Holes. These Occlusion Regions usually disappear after warping and blending of the original left and right maps, because most of the portions blocked by the foreground in the left map can be found in the right map, as shown in FIGS. 1(A) and 1(B). FIG. 1(A) shows the frame of the original view and its related depth map, while FIG. 1(B) shows the frame after warping and its related depth map. In theory, after the blending operation no holes should remain; in practice, however, in addition to occlusion, a major reason for the appearance of holes is depth value error, or minute differences in depth values for different parts of an object. In this condition the holes cannot be filled completely through blending, and Hole Filling has to be performed to refine the pseudo frame so that it is close to a real image frame.
Presently, there are various ways of filling the holes; the basic approach is to fill a hole with the pixel values of its adjacent background (minimum depth). The major reason for hole generation is that, after warping computation of the frame, the disparities of the foreground and the background are different, so that the originally occluded portions, which are usually blocked from being displayed, become exposed. Such a portion belongs to the background, whose depth value is relatively small; therefore, the pixels near the hole having small depth values are used to fill up the Occlusion Region. Finally, after filling up the holes, an intermediate pseudo frame is generated.
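The basic background-pixel approach described above can be sketched on a single scanline as follows. This is a simplified illustration under stated assumptions, not the method of the invention: it assumes a hole mask and a depth map are available, follows the text's convention that a smaller depth value means background, and fills each hole run from the bounding neighbor on the background side; the function name is illustrative.

```python
import numpy as np

def fill_holes_row(row, depth, hole_mask):
    """Fill hole runs in one scanline from the adjacent background pixel.

    row       : 1-D array of pixel values for the scanline.
    depth     : 1-D array of depth values (smaller = background,
                per the convention in the text).
    hole_mask : 1-D boolean array, True where the pixel is a hole.
    """
    W = len(row)
    filled = row.copy()
    i = 0
    while i < W:
        if hole_mask[i]:
            start = i
            while i < W and hole_mask[i]:   # find the end of this hole run
                i += 1
            left, right = start - 1, i      # non-hole neighbors (may be out of range)
            if left < 0:
                src = right                 # hole touches the left border
            elif right >= W:
                src = left                  # hole touches the right border
            else:
                # choose the side with the smaller depth value (background)
                src = left if depth[left] <= depth[right] else right
            filled[start:i] = row[src]
        else:
            i += 1
    return filled
```

Real implementations typically operate on the whole frame and handle color channels, but the depth comparison at the hole boundary is the core of the background-filling idea.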
Since the view synthesis technology is very sensitive to the quality of the depth map, once an error occurs in the depth map, it will be propagated to the pseudo frame, making the frame deformed or unnatural. Therefore, the present invention proposes a view synthesis method capable of depth mismatching checking and depth error compensation, to overcome the afore-mentioned problems of the prior art.
Therefore, presently, the design and performance of the view synthesis technology are not quite satisfactory, and there is much room for improvement.