CPC G06T 7/593 (2017.01) [G06T 7/73 (2017.01); G06T 7/85 (2017.01); H04N 13/128 (2018.05); H04N 13/194 (2018.05); G06T 2207/10024 (2013.01); H04N 2013/0081 (2013.01)]
19 Claims
1. A mesh-based image reconstruction method adapted for reconstructing a synthetic image of a scene from a perspective of a virtual camera having a selected virtual camera position, based on images of the scene captured by at least first and second physical cameras each having a respective physical view of the scene, the method comprising:
obtaining a native disparity map for the first physical camera having a physical view of the scene, the native disparity map containing disparity values, the native disparity map representing a computational solution to a stereo correspondence problem between the at least first and second physical cameras in a shared epipolar plane; and
utilizing at least the native disparity map for the first physical camera, executing the following:
(A) generating vertex values to form a vertex array, wherein each vertex value has (a) an associated coordinate within the native disparity map for the first physical camera, and (b) a disparity value, wherein the disparity value can be used to re-project the respective vertex value to a new location based on a selected projection function;
(B) executing a selected transformation of the vertex values;
(C) generating per-fragment disparity reference values and two-dimensional image space (U, V) coordinate values corresponding to the first physical camera;
(D) generating (U, V) coordinate values for an image obtained by the second physical camera;
(E) executing a weighted summation of samples from camera images captured by each of the first and second physical cameras; and
(F) generating an input to a rasterization depth test;
thereby reconstructing the synthetic image from the perspective of the virtual camera.
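For illustration only, the following is a minimal sketch of how steps (A) through (F) of claim 1 might be realized, assuming rectified physical cameras on a shared horizontal baseline so that the selected projection function reduces to a disparity-scaled horizontal shift, and assuming the virtual camera lies at a fractional position alpha along that baseline. The function and parameter names (reconstruct_virtual_view, alpha, and so on) are illustrative assumptions and do not appear in the claim.

```python
import numpy as np

def reconstruct_virtual_view(disparity, img_left, img_right, alpha):
    """Sketch of the claimed pipeline under the stated assumptions.

    disparity : (H, W) native disparity map for the first (left) physical camera.
    img_left, img_right : (H, W, 3) images from the first and second physical cameras.
    alpha : virtual camera position on the baseline, 0.0 = first camera, 1.0 = second.
    """
    H, W = disparity.shape
    out = np.zeros((H, W, 3), dtype=np.float32)
    # (F) buffer supplying the input to a rasterization-style depth test.
    zbuf = np.full((H, W), -np.inf, dtype=np.float32)

    # (A) vertex array: one vertex per disparity-map coordinate, each carrying
    #     its (x, y) coordinate and its disparity value.
    ys, xs = np.mgrid[0:H, 0:W]

    # (B) selected transformation: re-project each vertex into the virtual view
    #     by shifting it alpha * disparity pixels along the epipolar (x) axis.
    x_virtual = np.round(xs - alpha * disparity).astype(int)

    # (C)/(D) per-fragment disparity reference and (U, V) coordinates in the
    #         first camera's image (xs, ys) and in the second camera's image.
    u_right = np.clip(np.round(xs - disparity).astype(int), 0, W - 1)

    valid = (x_virtual >= 0) & (x_virtual < W)
    for y, xv, d, ul, ur in zip(ys[valid], x_virtual[valid], disparity[valid],
                                xs[valid], u_right[valid]):
        # (F) depth test: larger disparity means a nearer surface, which wins.
        if d > zbuf[y, xv]:
            zbuf[y, xv] = d
            # (E) weighted summation of samples from both physical cameras.
            out[y, xv] = (1.0 - alpha) * img_left[y, ul] + alpha * img_right[y, ur]
    return out
```

Under these assumptions, the weighted summation of step (E) blends the two physical camera samples according to the virtual camera's fractional position on the baseline, and the depth test of step (F) resolves occlusions by keeping, at each reconstructed pixel, the fragment with the larger disparity reference value.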