There is an increasing trend toward three-dimensional (3D) viewing in various industries, such as entertainment, mechanical engineering design, online shopping sites, and offline product advertisement panels. Many web-based shopping markets, websites, or store fronts show images, or in some cases a short video, of objects or products. The images are static and, at best, can be enlarged or zoomed for a clearer picture. In other cases a video of the product is captured, but this makes loading, and ultimately viewing, slow; further, the user sees only whatever was captured, mostly through streaming or a media player, in two-dimensional projection or only partly in three dimensions. The images and written information displayed provide limited information about the desired object. Limited information here means the written information displayed in relation to the object that is available for the end user to view. This is a passive way of information transfer. In conventional systems, web-based portals or sites, and online shopping portals, the user cannot interact with the product to the extent possible when the user or customer physically visits a shop, for example, viewing the product from all possible angles, checking functionalities, asking any desired queries about the product, or interacting with the product to see its interior or exterior just as in a real scenario. That is an active way of information transfer.
U.S. Pat. No. 7,680,694 B2, U.S. Pat. No. 8,069,095 B2, U.S. Pat. No. 8,326,704 B2, US20130066751A1, US20120036040A1, US20100185514A1, US20070179867A1 and US20020002511A1 discuss solutions for 3D viewing, and some forms of interaction, related to online shopping, shopping locations, and stores. These are limited to displaying a virtual shopping location on a user computer by streaming a 3D interactive simulation view via a web browser. However, they do not provide for generating a 3D model that carries real-object properties in a true sense, capable of user-controlled simulation and interactions not restricted or limited to pre-set or pre-determined interactions. Conventional systems, methods, and techniques fail to generate a 3D-model carrying properties of the real object, such as appearance, shape, dimensions, texture, fitting of internal parts, mirror effect, surface properties of touch and smoothness, light properties, and other characteristics and states of the real object, and they lack user-controlled realistic interactions, such as viewing rotation through 360 degrees in all planes, non-restrictive intrusive interactions, time-bound-change-based interactions, and real-environment-mapping-based interactions according to the characteristics, state, and nature of the said object. U.S. Pat. No. 7,680,694 B2, U.S. Pat. No. 8,326,704 B2, and WO 01/11511 A1 also discuss a concierge, animated figure, avatar, or sales assistant capable of offering information about products or graphics to customers, remembering customer buying behaviour and product choices, and offering tips and promotional offers. These types of interactions are limited to a pre-defined set of offers and product information. The input query is structured and is generally matched against a database to find and retrieve answers. However, there still exists a gap in bringing about real-time, intelligent, human-like interaction between the said animated figure and a real human user.
There is no mention of facial expressions, hand movements, and precision, which are prime criteria for receiving a response from the animated figure or concierge that is human-like and responsive to the query of the real human user. For active communication, a natural interface, such as an understanding of a language like English, is necessary. Technology to decipher the meaning of language during a text chat by a virtual assistant or intelligent system, and to provide a user-query-specific response, is a costly endeavour and remains a problem to be solved.
A JP patent with Application Number 2000129043 (Publication Number 2001312633) discusses a system that simply shows texture information and touch-sense information in the form of a write-up, in addition to still-picture information or a photographic image, an explanatory sentence, video, and three-dimensional information, all of which the user has to read. This and other patents, U.S. Pat. No. 6,070,149 A, WO 01/69364 A3, WO 02/48967 A1, U.S. Pat. No. 5,737,533 A, U.S. Pat. No. 7,720,276 B1, U.S. Pat. No. 7,353,188 B2, U.S. Pat. No. 6,912,293 B1, US20090315916A1, and US20050253840A1, discuss 3D viewing and simulation and virtual or online shopping experiences, but lack one or more of the points and technologies given below.
Further, most existing 3D-simulation technology for providing a digital object viewing and interaction experience, in addition to the above, also lacks one or more of the following:
1. The existing simulated 3D-models are hollow models, meaning such models do not allow intrusive interactions, such as seeing an exploded view of the parts of a simulated 3D-model of an object in real time, or opening the parts of the 3D-model one by one as a person could do in a real scenario. For example, in a conventional virtual reality set-up, a user cannot open the compressor of a refrigerator from a virtual 3D-model of the refrigerator; open or interact with a sub-part of the simulated 3D-model, such as the battery and other internal parts removed from a 3D-model of a mobile phone, for interaction and realistic viewing; rotate the tyres of a car or move the steering wheel to judge the movement and power steering; or examine the internal parts or interior build of a simulated 3D-model of a mobile phone in real time. In some conventional cases, limited options are provided, on clicking which an internal part of an object becomes visible in a photographic or panoramic view, but the user cannot analyse internal parts beyond the provided options. Another example is the 3D view of a bottle filled with oil or any liquid: conventional systems can display only a 3D-simulated view, and a user cannot open the cork of the bottle or pour the liquid from it in an interactive manner as desired, which is possible in a real scenario. In other words, user-controlled interaction according to user choice is not feasible.
2. They do not allow realistic extrusive interactions, such as rotating 3D-models of objects through 360 degrees in different planes with the ability to interact from any projected angle. Existing technologies mostly allow only 360-degree rotation in one plane.
Further, current 3D-simulation technology fails to give a realistic 3D-simulation or 3D-visualization effect: lighting effects for light-emitting parts of a 3D-model, interaction with 3D-models having electronic display parts to understand electronic display functioning, and sound effects of the object, such that the illusion of a real object in virtual views is not very precise. On a real object, input is given at some part, such as a sound button on a TV, to perform a desired operation, such as producing sound in the speakers; similarly, input to a 3D object should be possible to perform the operation of the corresponding part of the 3D object, emulating the real scenario.
3. Another lack of originality and closeness to the real set-up concerns operating pressure and the senses of taste and touch. For example, when a user opens a movable part of a multi-part object such as a refrigerator, the user holds the handle and applies pressure to open the refrigerator door. Existing virtual 3D-simulated models of objects and technology cannot convey the smoothness or softness of the handle or the operating pressure or force required to open the refrigerator door.
4. Monitoring or visualizing time-bound changes observed while using or operating an object is not possible. A user cannot check product or object behaviour after a desired duration, for example, the heating of an iron, the cooling in a refrigerator, or the cooling generated by an air conditioner in a room. Further, from a simulated 3D-model, a user cannot hear a sound mimicking the real sound produced when the door of a real refrigerator is opened in a real set-up, nor can the change in sound after certain intervals of time be heard or monitored to experience the product's performance or to compare it with another product.
5. Further, in a real scenario a user can switch on a laptop, computer, iPad, mobile phone, or any computing device, check the start-up time and the loading speed of the operating system, play music, and so on. Such real-time interactions are lacking for various virtual 3D-models, and the user's choice is limited to observing only the outer looks of the object, such as a laptop.
6. Real-environment-mapping-based interactions are interactions where the user's environment, that is, the place or location in the vicinity of the user, is captured through a camera, mapped, and simulated in real time, such that a realistic 3D-model or virtual object displayed on an electronic screen can be seen interacting with the mapped and simulated environment. Such real-time interactions, including the mirror effect, are lacking in current technologies.
7. The existing technology does not allow dynamic customization of the texturing pattern of a 3D-model during loading of the 3D-model.
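As an illustration of the full-sphere rotation discussed in point 2 above, the standard way to rotate a 3D-model through 360 degrees in any plane, rather than about a single fixed axis, is to represent orientation with quaternions. The following is a minimal, self-contained sketch (not taken from any cited patent; function names and the axis-angle interface are illustrative assumptions):

```python
import math

def quat_from_axis_angle(axis, angle_rad):
    # Unit quaternion (w, x, y, z) for a rotation of angle_rad about axis.
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle_rad / 2.0) / n
    return (math.cos(angle_rad / 2.0), ax * s, ay * s, az * s)

def quat_mul(a, b):
    # Hamilton product of two quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate_point(point, axis, angle_rad):
    # Rotate a vertex as q * p * q_conjugate. Because the axis is
    # arbitrary, the model can be turned 360 degrees in any plane,
    # not just about one pre-set axis.
    q = quat_from_axis_angle(axis, angle_rad)
    qc = (q[0], -q[1], -q[2], -q[3])
    p = (0.0,) + tuple(point)
    w, x, y, z = quat_mul(quat_mul(q, p), qc)
    return (x, y, z)

# Rotate a vertex of the model 90 degrees about the vertical (z) axis;
# the result is approximately (0, 1, 0).
v = rotate_point((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
print(v)
```

Applying `rotate_point` to every vertex of a mesh, with the axis and angle driven by the user's drag gesture, gives the user-controlled rotation from any projected angle that the enumerated points describe as missing.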
Such real-time and enhanced interactions are lacking in current virtual-reality-related technologies. The above constraints in currently available technologies make it very difficult for a human user to interact with things virtually in the way he or she can interact with them in the real world, and hence there is a need for a technology that enhances the digital object viewing and interaction experience and bridges the gap between the real and virtual worlds in a true sense.
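The time-bound-change interaction described in point 4 above can be sketched as a simple state model queried at a user-chosen duration. Here a refrigerator's interior temperature is modelled as decaying exponentially from ambient toward a set point; the function name, rate constant, and temperatures are illustrative assumptions, not values from any cited system:

```python
import math

def simulate_cooling(t_ambient, t_target, rate, minutes):
    # Newton's-law-of-cooling style model: the simulated interior
    # temperature decays exponentially from ambient toward the target.
    return t_target + (t_ambient - t_target) * math.exp(-rate * minutes)

# Query the simulated product's behaviour after desired durations,
# e.g. to compare cooling performance between two models.
for m in (0, 10, 30, 60):
    print(f"after {m:2d} min: {simulate_cooling(30.0, 4.0, 0.08, m):.1f} C")
```

The same pattern, a state function of elapsed time sampled on demand, applies to the other time-bound examples in the list, such as the heating of an iron or a change in operating sound over time.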
The object of the invention is to provide user-controlled realistic 3D simulation for an enhanced object viewing and interaction experience, capable of displaying real products virtually as interactive and realistic 3D-models.