Computer applications of graphics, and especially of imaging, demand heavy computation, which is costly both in hardware requirements and in computing time.
Engineering applications of computer graphics, such as photogrammetry, aim to construct from reference images a "scene model": a geometrically precise model of the real world from which accurate images of the scene can be synthesized from arbitrary viewpoints in space. In such procedures, measurements of geometric parameters relating to the scene and the camera at the time the "model images" (also called "reference views" or "reference images") were acquired are used to extract the parameters of the model. An image, such as is seen on a video screen, is a two-dimensional (2D) projection of a scene in the real (physical) three-dimensional (3D) world.
All the mathematical entities describing the space and its projections are derived from images and from correspondences between images. Correspondences between images, that is, coordinates in the image planes of projections of the same scene point, are determined by finding (manually or by computerized methods) distinctive points on objects represented in one image and locating the same points as represented in the other images of the scene. An illustrative method for finding correspondences between images is described in B. D. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, in Proceedings IJCAI, pages 674-679, Vancouver, Canada, 1981. FIG. 1A shows an image of a triangle having three apices (m1, m2, m3), whereas FIG. 1B shows the same triangle viewed from a different aspect with corresponding points m1', m2', m3'. Warping is used in computerized graphics for rendering images; it deforms existing images by designating new positions in the 2D plane to some or all of the picture elements (pixels).
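The warping operation just described can be sketched in a few lines. The following is a minimal illustrative sketch, not an implementation from any of the cited works: it forward-warps a small synthetic image by assigning each source pixel a new destination position, exactly the "designating new positions to pixels" idea above. The function name and test data are assumptions for illustration.

```python
import numpy as np

def forward_warp(image, new_positions):
    """Warp an image by assigning each source pixel a new (row, col)
    position in the output plane, as in 2D warping for rendering.

    image: (H, W) array of pixel intensities.
    new_positions: (H, W, 2) integer array giving the destination
    (row, col) of every source pixel.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for r in range(h):
        for c in range(w):
            nr, nc = new_positions[r, c]
            if 0 <= nr < h and 0 <= nc < w:  # drop pixels warped off-plane
                out[nr, nc] = image[r, c]
    return out

# Example: shift a 4x4 test image one pixel to the right.
img = np.arange(16.0).reshape(4, 4)
rows, cols = np.indices(img.shape)
dest = np.stack([rows, cols + 1], axis=-1)
warped = forward_warp(img, dest)
```

In practice the new positions are not arbitrary shifts but are computed from image correspondences (a transfer function), and holes left by forward warping are filled by interpolation; both refinements are omitted here for brevity.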
Alternative image synthesis methods, namely image-based rendering methods, are currently the subject of development efforts because they avoid the cumbersome step of 3D model construction mentioned above. The present invention relates to that technological field. Typically, in this synthetic rendering approach, a transfer function is derived from a combination of correspondences between the model images and designated new-image parameters, and is then applied by means of a warping function. Some methods can generate only interpolative images, i.e., images within the limit defined by the angle between the lines connecting the most extreme camera positions with the scene. S. Laveau and O. D. Faugeras, 3-D scene representation as a collection of images, in Proceedings of the International Conference on Pattern Recognition, pages 689-691, Jerusalem, Israel, October 1994, describes a method according to which two reference views, i.e., real images of the scene, are used to reproject a real image into a synthetic image representing a view from a different aspect. This method suffers from some serious restrictions; for example, it allows extrapolated images to be reprojected but is limited in the viewing angle per projection. These authors use the concept of the "fundamental matrix", a specific $3 \times 3$ rank-2 matrix that describes the bilinear relations between points in two reference views of a scene.
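The bilinear relation encoded by the fundamental matrix can be verified numerically. The sketch below is illustrative only (it is not the Laveau-Faugeras method): it assumes identity camera intrinsics, so the fundamental matrix reduces to $[t]_\times R$, and both cameras and the scene point are synthetic assumptions chosen for the example.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix such that skew(t) @ x == cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Two synthetic cameras: the first at the origin, the second rotated
# about the y-axis and translated (identity intrinsics assumed).
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])

# Fundamental matrix for cameras [I|0] and [R|t]: F = [t]_x R.
F = skew(t) @ R

# Project a scene point into both views and check the bilinear
# relation p'^T F p = 0.
X = np.array([0.5, -0.3, 4.0])     # point in the first camera frame
p = X / X[2]                       # projection in the first view
Xc = R @ X + t                     # same point in the second camera frame
pp = Xc / Xc[2]                    # projection in the second view
residual = pp @ F @ p              # vanishes up to rounding error
```

The rank-2 property stated above can be checked with `np.linalg.matrix_rank(F)`, and the residual is zero for any scene point, which is what makes the relation useful for transfer between views.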
E. B. Barrett, P. M. Payton, and G. Gheen, Robust algebraic invariant methods with applications in geometry and imaging, in Proceedings of the SPIE on Remote Sensing, San Diego, Calif., July 1995, demonstrates that the mathematical concept of trilinearity constraints improves the accuracy of image-based rendering and facilitates extrapolation in the reprojection of images; according to their reasoning, trilinear tensors provide the best option for the mathematical representation of scenes. As described in G. Temple, Cartesian Tensors (Methuen & Co., 1960), a tensor is a multi-linear function of direction.
A trilinear tensor as described in A. Shashua, Algebraic functions for recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):779-789, 1995 (see also U.S. Pat. No. 5,821,943, issued Oct. 13, 1998, in the name of A. Shashua and entitled Apparatus And Method For Creating And Manipulating A 3D Object Based On A 2D Projection Thereof) can be used to represent the scene space so as to govern the 3D reprojections. The trilinear tensor is generated as follows. Let $P$ be a point in 3D projective space projecting onto points $p$, $p'$, $p''$ in three views $\Psi$, $\Psi'$, $\Psi''$, respectively, each represented by a two-dimensional projective space. The relationship between the 3D and 2D spaces is represented by the $3 \times 4$ matrices $[I, 0]$, $[A, v']$, and $[B, v'']$, such that

$$p = [I, 0]P, \qquad p' \cong [A, v']P, \qquad p'' \cong [B, v'']P$$
where $P = [x, y, 1, k]$, $p = (x, y, 1)^T$, $p' = (x', y', 1)^T$, and $p'' = (x'', y'', 1)^T$ (the superscript $T$ designating a transposed matrix). The coordinates $(x, y)$, $(x', y')$, $(x'', y'')$ of the points $p$, $p'$, and $p''$ are with respect to some arbitrary image origin, such as the geometric center of each image plane. It should be noted that the $3 \times 3$ matrices $A$ and $B$ are 2D projective transformations (homography matrices) from view $\Psi$ to views $\Psi'$ and $\Psi''$, respectively, induced by some plane in space (the plane $\rho = 0$). The vectors $v'$ and $v''$ are known as epipolar points, that is, the projections of $O$, the center of projection of the first camera, onto views $\Psi'$ and $\Psi''$, respectively. The trilinear tensor is an array of 27 entries given by

$$\alpha_i^{jk} = v'^j b_i^k - v''^k a_i^j, \qquad i, j, k = 1, 2, 3 \qquad (1)$$
where superscripts denote contravariant indices (representing points in the 2D plane) and subscripts denote covariant indices (representing lines in the 2D plane, like the rows of $A$). Thus $a_i^j$ is the element in the $j$-th row and $i$-th column of $A$, and $v'^j$ is the $j$-th element of $v'$.