Currently, real or physical products and objects are displayed digitally with the help of images, photographs, or videos representing the real object in various implementations. A 3D computer graphics model is a better option to represent a real product; however, existing 3D computer graphics models rendered in real time lack realism and look unreal or artificial due to artificial-looking texture on the 3D computer graphics model, hereinafter referred to as a 3D model. Even 3D models generated by non-real-time rendering, such as in animation movies, also lack realism or real texture. Efforts have been made to use a 3D model to represent a real car in some implementations, where electronic systems display 3D models of a car with pre-defined and/or very limited interaction possibilities available to users. However, the 3D models in such systems still look cartoonish or artificial due to the use of artificial colours or images as texture. For example, in the case of a 3D model of a real car textured using conventional texturing methods or techniques, the interiors, seats, steering, and other internal and/or external parts look unreal.
Currently, textures of 3D models of real products such as cars, bikes, home appliances, or other objects of complex structure or geometry are made by texture artists, who synthesize or create artificial texture using different parameters such as colour, shine or glossiness, reflectivity, bumps, etc., with the objective of providing the look and feel of realistic texture using different applications or software. However, it is observed that this kind of texture mapping gives an artificial look, and the textured 3D models of real 3D objects do not look real. Artificial texture cannot replace real photographs, because the surfaces of a real object do not carry a single uniform texture pattern that could be recreated artificially; such texturing remains an artistic approximation rather than an exact match to a photograph. Further, displaying reality-based texture during the blinking of real light from a light-emitting device, such as the headlight of an automotive vehicle, is challenging using conventional texturing techniques, systems, or methods. Instead of creating artificial texture, if real photographs and/or video of the external and internal parts of a real 3D object, such as a car, are used as texture for the 3D model, then the 3D model of the car will look extremely real from both exterior and interior views. However, using real photographs for texture mapping of the external and internal surfaces of external and internal parts of a 3D model poses a number of challenges, discussed below.
Further, in some implementations, one or more patches are mapped using photographs while the remaining areas of the 3D model are painted by a texture artist using artificial texture. Texture mapping in computer graphics is currently limited mostly to texturing only the exterior or outside region of 3D models, primarily using artificial texture such as images other than photographs, or colours, applied via a texture map. Unwrapping a 3D model and then providing a functional UV layout before applying a texture map is known. However, retaining precise details, texturing regions hidden by the fitting of one part with another (discussed in FIG. 8), and texturing internal parts using numerous real photographs and/or video, say in the hundreds or thousands, is a challenge and a problem unaddressed in the art. In some implementations where photographs are used, only a few photographs (usually ranging from 2 to 20) of the full body from different angles are used for texturing the 3D model, and usually only for external surfaces that are planar or otherwise not of complex geometry. Such texturing using conventional techniques, methods, and systems cannot provide or retain minute detail and vivid appearance in the 3D model. One reason for the lack of detail is the manner in which photographs are captured: individual parts of the real object are rarely, if ever, segregated or dismantled so that detailed or close-up photographs can be captured of each face of the segregated part. Further, adjusting or calibrating different photographs, and aligning them on the UV layouts of external and internal surfaces while maintaining visual consistency and removing any distortion or visible artifacts, is a difficult problem.
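As a rough illustration of the conventional pipeline referred to above (unwrap the model, build a functional UV layout, then apply a texture map), the following is a minimal sketch of a planar UV projection; the mesh and function names are hypothetical and do not come from any specific tool:

```python
# Minimal sketch of planar UV unwrapping: project 3D vertices onto the
# XY plane and normalise them into the [0, 1] texture square.
# Production unwrapping (seam-based, distortion-minimising) is far more
# involved; this only illustrates what a UV layout provides.

def planar_uv_layout(vertices):
    """Map each (x, y, z) vertex to a (u, v) coordinate in [0, 1]."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    span_x = (max_x - min_x) or 1.0   # avoid division by zero
    span_y = (max_y - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, _ in vertices]

# A tiny quad standing in for one face of a dismantled part:
quad = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
print(planar_uv_layout(quad))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Once such (u, v) coordinates exist, a photograph can be sampled at those coordinates to colour each face; the alignment and calibration difficulties described above arise precisely at that sampling step.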
Photograph-based texturing of a 3D model is a complex problem in real-time rendering (the outcome of user-controlled interaction implementations), as image data is heavy compared to using colours as texture. Although non-real-time rendering can handle very heavy texture, only limited attempts have been made to use real photographs as texture, and such attempts could not achieve the real look and feel of the real object; in other words, the results looked cartoonish. For example, the paper titled “Texture Montage: Seamless Texturing of Arbitrary Surfaces From Multiple Images”, published in ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2005, Volume 24, Issue 3, July 2005, pages 1148-1155, discusses automatic partitioning of a single-body 3D model mesh and the images, where the mapping is driven solely by the choice of feature correspondences, while a surface-texture in-painting technique is used to fill in the remaining charts of the surface that have no corresponding texture patches. Further, the technique discussed in the publication titled “Texturing internal surfaces from a few cross sections,” Comput. Graph. Forum, vol. 26, pp. 637-644, 2007, is limited to carving objects or texturing internal surfaces in cutting simulation by taking cross-section photographs of real objects. Additionally, that technique is limited to synthesis of colour by texture morphing, which is an approximation and not an exact representation of the factual details of the photographs, owing to the difficulty of texturing internal surfaces using photographs directly. The difficulty increases when the internal surface is itself multi-faceted and of complex geometry, as in an automobile. Furthermore, such a 3D computer model cannot represent internal parts that are separable, and texturing the exterior and inner surfaces of internal parts becomes a very difficult and challenging problem.
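To make concrete why photographic texture data is "heavy" compared with flat colours in real-time rendering, a back-of-the-envelope comparison follows; the 4096-pixel resolution and uncompressed RGBA format are illustrative assumptions, not figures from the cited publications:

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed GPU memory for one RGBA texture; a full mipmap
    chain adds roughly one third on top of the base level."""
    base = width * height * bytes_per_pixel
    return int(base * 4 / 3) if mipmaps else base

flat_colour = 4                           # a single RGBA value: 4 bytes
photo_4k = texture_bytes(4096, 4096)      # one photographic texture
print(photo_4k // (1024 * 1024), "MiB")   # prints: 85 MiB
```

One such uncompressed 4K photograph already costs tens of mebibytes, and texturing every internal and external part of a complex object from hundreds of photographs multiplies that cost, which is why real-time pipelines historically fell back on cheap flat colours or small artificial textures.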
For the same reasons, current textured 3D models of complex geometry, such as cars, bikes, and complex machinery, look cartoonish.
In some implementations, costly 3D scanners are used to create 3D models, followed by automated texture mapping, usually using images or silhouette images. In such scanning systems, the generated 3D model is a solid body, or a shell-type single body of the exterior of the real object. For example, if a complex 3D object with complicated geometry, such as a car, is scanned, the generated 3D model will be a single body depicting the outer surface or region of the car with a very high polygon count. Sub-parts such as the car doors, windows, and bonnet cannot be separated in the generated 3D model. Additionally, and importantly, scanning the interior of the car, and further texturing that interior, is very difficult with known systems, and a costly affair.
Now, for the purpose of understanding this invention, a 3D model is a 3D computer graphics model representing a real or physical object, where the 3D computer graphics model representing the real 3D object is used in user-controlled interactions. The 3D model is either a single-body 3D model or a multi-part 3D model having external and/or internal parts that form a single 3D model. As used in this description and in the appended claims, user-controlled interactions are interactions performed by a user in real time with a 3D model representing a real object, where, on the user providing an input, a corresponding response is seen in the 3D computer model, and where the response is generated by real-time rendering of the corresponding view of the 3D model as output. For example, the response can be a movement of the entire 3D model, a part of the 3D model moving to a different position from its initial state or position, or any texture change resulting in a change in the view of the 3D model. The user-controlled interactions are performed per the user's choice, or in other words are controlled by the user. User-controlled interactions also include user-controlled realistic interactions, an advanced form of user-controlled interactions, which are discussed in detail in U.S. patent application Ser. No. 13/946,364, Patent Cooperation Treaty (PCT) Application No. PCT/IN2013/000448, and Indian Patent Application No. 2253/DEL/2012, all now pending, filed by the same applicants as this application. Traditionally, texturing is carried out using colours and/or images. The images, when used for texturing, are either artificially created, or created to resemble texture close to photographs.
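The input-response loop defined above can be sketched as follows; the part names, states, and `render` placeholder are hypothetical, serving only to illustrate that each user input changes the model's state and that a real-time renderer would then draw the corresponding view:

```python
# Hypothetical sketch of a user-controlled interaction: a user input
# changes the state of one part of the 3D model, and the renderer
# redraws the corresponding view in real time.

class Model3D:
    def __init__(self):
        # States of movable parts; "closed"/"open" stand in for the
        # transforms a real renderer would apply to the geometry.
        self.parts = {"door": "closed", "bonnet": "closed"}

    def interact(self, part, action):
        """Apply a user-controlled interaction and return the new view."""
        if part not in self.parts:
            raise ValueError(f"unknown part: {part}")
        self.parts[part] = action
        return self.render()

    def render(self):
        # Placeholder for real-time rendering of the current state.
        return dict(self.parts)

car = Model3D()
view = car.interact("door", "open")
print(view)  # {'door': 'open', 'bonnet': 'closed'}
```

The essential property is that the response is computed from the user's input at run time rather than being a pre-rendered sequence, which is what distinguishes user-controlled interaction from playback of a fixed animation.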
Therefore, there exists a challenge to texture 3D models on external or internal surfaces, or on internal parts, using a plurality of real photographs and/or video, to provide an extremely realistic and detailed view on and/or within the 3D model. A further challenge exists to texture 3D models using photographs/video such that the 3D models are able to support user-controlled interactions and real-time rendering.
Further, it is also a challenge to obtain a 3D model whose texture is the same as, and as realistic as, captured photographs of the physical object, without much increase in the file size of the texture data of the 3D model. It is partly because of this size increase that most texture-mapping processes use only a patch from the original photographs of the physical object for texture mapping in one plane, colouring or painting the remaining un-textured portions. Texturing a 3D model of a very complicated 3D structure of complex geometry, such as an automobile, electronic object, or machinery, on the external and internal surfaces of external and internal parts using real photographs, while retaining factual details, is a real challenge and a problem to be solved. For example, in the case of a 3D model of a mobile phone, the multi-part 3D model includes parts viewed from outside, such as the display, keys, body, and battery cover, and internal parts such as the battery, the interior of the phone, the inner side of the battery cover, and the SIM slots. It is relatively easy to texture the outer body of the 3D model as a whole, but the difficulty increases when mapping texture onto functional parts, such as the keys of the phone, where the functional parts are movable or can be pressed during a user-controlled interaction. The difficulty increases further if texture is to be mapped onto internal parts, such as an integrated SIM slot positioned beneath the battery, which in turn is positioned beneath the battery cover, and onto the inner side of the battery cover, in one example of a 3D model of a mobile phone. Applying photographic images or video to the UV layouts of the functional and internal parts of a 3D model for texture mapping, while simultaneously retaining the functionality of all disintegrated parts, is a challenge and a problem unaddressed in the art. Additionally, during user-controlled realistic interactions, as mentioned in patent application Ser. No. 13/946,364, filed on Jul. 19, 2013, now pending, by the same applicants as this application, the view of the 3D model changes per the interactions performed at the user's choice. Thus, a further need arises to integrate texturing using photographs and/or video with a dynamic texture-changing ability on the same part or on different sub-parts depending on the user-controlled interaction, where the texture comes from real photographs and/or video of real objects, in a cost-effective and simplified manner, and for increased realism in the view of 3D models during user-controlled realistic interactions.
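The multi-part structure described in the mobile-phone example (body, battery cover, battery, SIM slot) can be illustrated by a hypothetical part tree in which each part carries its own photograph-derived texture and any one part can be re-textured at run time; the class, part names, and file names are illustrative assumptions:

```python
class Part:
    """One part of a multi-part 3D model, textured from a photograph."""
    def __init__(self, name, texture, children=()):
        self.name = name
        self.texture = texture          # e.g. a photograph file name
        self.children = list(children)

    def find(self, name):
        """Depth-first search for a named part anywhere in the tree."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

    def set_texture(self, name, texture):
        """Dynamic texture change on one part during an interaction."""
        part = self.find(name)
        if part is None:
            raise KeyError(name)
        part.texture = texture

# Hypothetical mobile-phone hierarchy from the example above:
mobile = Part("body", "body_photo.jpg", [
    Part("battery_cover", "cover_photo.jpg", [
        Part("battery", "battery_photo.jpg", [
            Part("sim_slot", "sim_slot_photo.jpg"),
        ]),
    ]),
])

# Swapping one part's texture leaves the rest of the tree, and hence the
# functionality of the other disintegrated parts, untouched:
mobile.set_texture("sim_slot", "sim_slot_closeup.jpg")
print(mobile.find("sim_slot").texture)  # sim_slot_closeup.jpg
```

Keeping each part as a separate node with its own texture is what allows a user-controlled interaction to reveal the SIM slot beneath the battery, or change a single part's texture, without re-texturing the whole model.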
The present invention is also directed to overcoming or at least reducing one or more of the challenges, difficulties and problems set forth above.