Photorealism has been one of the goals that computer graphics engineers have been striving to achieve ever since the creation of the first computer-generated image. Many modern cinematic, flight simulator and even video game effects all depend on the ability to accurately model the real world, thus allowing a computer to create images that accurately simulate actual scenes.
One recent effective use of computer graphics to simulate the real world was in the Warner Brothers film “The Perfect Storm” released in July 2000. That film told the story of what happened in late October 1991 in the North Atlantic when a low-pressure system filled with cold air bumped into a hurricane filled with warm air. The resulting “perfect storm” produced waves over 100 feet high. Heading straight into that storm was the Andrea Gail, a 72-foot steel fishing boat on her way home to Gloucester, Mass., with a hold full of fish and a six-man crew on board.
When Hollywood set out to make a movie about this exciting and terrifying event, they knew they could do some of the film sequences using full-size models of boats in large water tanks, but that it would be impossible to recreate a 100-foot wave. The answer was to use computer graphics to model the ocean's surface. They hired a computer graphics special effects company that reportedly used a crew of nearly one hundred computer graphics engineers and technical directors working for more than fourteen months using hundreds of computer graphics workstations to create the film's special effects shots. It reportedly took several hours of computer time to make a few seconds of special effects for the film. The resulting images were quite impressive and realistic but were very expensive and time-consuming to create. See Robertson, “ILM's Effects Crew Plunged Deep Into State-of-the-Art Technology to Create Digital Water for The Perfect Storm”, Computer Graphics World (July 2000).
Much academic work has also been done in this area in the past. See for example Foster et al, “Practical Animation of Liquids”, Computer Graphics Proceedings, Annual Conference Series pp. 23-30 (SIGGRAPH 2001) and papers cited therein. However, further improvements are possible and desirable.
For example, the computer graphics techniques used in “The Perfect Storm” and other high-end cinematic productions are generally far too processor-intensive to be practical for use in routine lower-end computer graphics environments. As one specific example, it is often highly desirable to create realistic water effects for video games. There have been many fun and successful video games in the past relating to water sports such as jet skiing, boating and fishing. However, the typical home video game system or personal computer has a relatively small and inexpensive processor that is being shared among a number of different tasks and therefore does not have a large amount of processing power available for producing water effects. It would be highly desirable to be able to include in such water sports games realistic effects showing water disturbances such as waves, wakes, splashes, water droplets and other water surface effects. However, to be practical in this environment, any such effects should be implemented in a computationally efficient manner so they can be imaged in real time (or near real time) using a relatively low-capacity processor such as those found in home video game systems and personal computers.
The present invention solves this problem by providing techniques for modeling and/or rendering water and other effects (e.g., surface disturbances and motions) in an efficient way that can be performed in real time or near real time using relatively low-capability processing resources such as those found in typical home video game systems and personal computers.
In accordance with one aspect of an illustrative and exemplary embodiment, water surface is modeled using multiple layers. Even though the surface of water in the real world generally has only a single layer, the modeling employed in an illustrative embodiment uses multiple layers with different properties and characteristics. For example, one layer may be used to model the general look and feel of the water or other surface. One or more further layers may be used to model waves propagating across the surface. A further layer may be used to model wakes generated by objects moving on the surface. Yet another layer may be used to model disturbances created by objects that have dropped onto the surface. Additional layers may be used to model wind effects, whirlpool effects, etc.
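The multi-layer approach described above can be sketched in code. The following is a minimal illustration only, not the patented implementation: the layer classes, the sine-based wave shape and the exponentially decaying wake are assumptions chosen for demonstration. The essential idea shown is that each layer models the surface independently and the layers' height contributions are composed into a single displaced surface.

```python
import math

class WaveLayer:
    """Illustrative layer: broad waves propagating across the surface.
    The sinusoidal wave shape is an assumption for demonstration."""
    def __init__(self, amplitude, wavelength, speed):
        self.amplitude = amplitude
        self.wavelength = wavelength
        self.speed = speed

    def height(self, x, z, t):
        # Height contribution of a wave traveling along x at time t.
        phase = 2.0 * math.pi * (x / self.wavelength - self.speed * t)
        return self.amplitude * math.sin(phase)

class WakeLayer:
    """Illustrative layer: a localized disturbance (e.g., behind a boat)
    that decays with distance from its source point."""
    def __init__(self, source_x, source_z, amplitude, falloff):
        self.source_x = source_x
        self.source_z = source_z
        self.amplitude = amplitude
        self.falloff = falloff

    def height(self, x, z, t):
        dist = math.hypot(x - self.source_x, z - self.source_z)
        return self.amplitude * math.exp(-self.falloff * dist) * math.cos(dist - t)

class WaterSurface:
    """Composes any number of layers into one surface. Layers can be
    added or dropped dynamically as processing resources allow."""
    def __init__(self, layers):
        self.layers = list(layers)

    def height(self, x, z, t):
        # The single physical surface is the sum of all virtual layers.
        return sum(layer.height(x, z, t) for layer in self.layers)
```

Because each layer is self-contained, dropping a layer (for example, when the processor is busy) simply removes its term from the sum, which mirrors the dynamic layer-count adjustment described above.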
There are several advantages to the illustrative embodiment's approach of using multiple virtual layers to model a single physical surface. For example, the number of layers being used in a given area of a three-dimensional world can be adjusted dynamically depending upon the amount of processing resources available. Different processes and algorithms can be employed on different layers to give each layer a different look and feel. Thus, each layer may affect the game physics differently, and each layer may also affect the surface disturbance rendering differently.
In accordance with another aspect of an illustrative embodiment, a 3D polygon mesh of a surface such as water that is subject to disturbances is generated based on camera location and direction in the 3D world. For example, the polygon mesh may be generated depending on a point location that is interpolated between the camera direction vector and the surface being imaged. As the camera direction becomes more aligned with the surface, the location point will, in an illustrative embodiment, tend toward the intersection of the lower camera frustum vector with the water surface. As the camera direction becomes more perpendicular with respect to the water's surface, the location point tends toward the intersection of the camera direction vector with the water's surface. This technique thus tends to generate smaller polygons near the selected location and larger polygons further from the selected location—such that the parts of the polygon mesh near the selected location have a higher resolution than the parts of the mesh that are further from the selected location. This technique provides an inherent level of detail feature, resulting in a more uniform polygon size on the screen—minimizing the amount of processing time spent generating small polygons.
The illustrative technique thus generates fewer polygons to cover the same area as compared to a typical uniform grid and also reduces the level of detail feature as the camera becomes more perpendicular to the water's surface. This way, the illustrative technique does not generate skewed polygons when the camera looks directly down onto the surface, but instead generates a perfectly uniformly sized polygon grid in the illustrative embodiment. The illustrative technique also scales the polygon size dynamically based on how far the camera is from the selected location in order to ensure that the polygons on the screen stay roughly the same size without wasting extensive processing resources on rendering very small polygons that will not substantially contribute to the overall image.
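The focus-point selection described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the embodiment's actual math: the water surface is taken as the plane y = 0, and the blend weight (derived from how aligned the camera direction is with the surface) is one plausible choice among many.

```python
import math

def ray_plane_y0(origin, direction):
    """Intersect a ray with the water plane y = 0; return (x, z) or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dy) < 1e-9:
        return None          # ray parallel to the surface
    t = -oy / dy
    if t < 0:
        return None          # intersection behind the camera
    return (ox + t * dx, oz + t * dz)

def focus_point(cam_pos, cam_dir, lower_frustum_dir):
    """Pick the mesh focus point by interpolating between the camera
    direction's surface intersection and the lower frustum vector's
    surface intersection, per the camera's alignment with the surface."""
    length = math.sqrt(sum(c * c for c in cam_dir))
    # alignment -> 0 when looking straight down, -> 1 for a grazing view
    alignment = 1.0 - abs(cam_dir[1]) / length
    p_dir = ray_plane_y0(cam_pos, cam_dir)
    p_frustum = ray_plane_y0(cam_pos, lower_frustum_dir)
    if p_dir is None:
        return p_frustum
    if p_frustum is None:
        return p_dir
    # Grazing views favor the lower-frustum intersection; top-down
    # views favor the direction-vector intersection.
    return tuple((1.0 - alignment) * a + alignment * b
                 for a, b in zip(p_dir, p_frustum))
```

The mesh generator would then emit small polygons near the returned point and progressively larger ones away from it, giving the inherent level-of-detail behavior described above.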
In accordance with yet another aspect of an illustrative embodiment, water droplets hitting a window or other see-through surface may be simulated. In this particular illustrative example, an indirect texturing feature is used. In more detail, an indirect texture map is created defining a delta specifying how a water droplet distorts the image to be seen through a transparent or translucent surface such as a window. Each texel of this indirect texture map in the illustrative embodiment is used as an offset for texture coordinate lookup into another texture defining the undistorted version of the area of the screen to which the water droplet will be rendered. In the exemplary and illustrative embodiment, the indirect texture map comprises intensity/alpha values, where one channel specifies the U offset and the other specifies the V offset.
In the illustrative embodiment, the area of the screen to which the water droplet will be rendered is first rendered and then placed (e.g., copied out) into a base texture map. The base texture map is then rendered using the indirect texture map to distort the texture coordinates at each texel. The result is an image that is distorted by the water drop indirect map. This technique is not limited to water droplets and window effects, but can be more generally applied to produce other special effects (e.g., distortion by ice, frost or any other effect as seen through any type of a transparent or translucent object or other imaging surface).
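The indirect-texture lookup described above can be sketched in software as follows. This is a minimal illustration only: real hardware performs this per-texel offset lookup in the texture unit, and the signed integer offset encoding and edge clamping shown here are assumptions for demonstration.

```python
def apply_indirect_distortion(base, indirect):
    """Distort a base texture using an indirect map.

    base:     2D list of texels (the undistorted screen-area texture).
    indirect: 2D list of (dU, dV) pairs, one per texel; each pair offsets
              the coordinate used to sample `base`, as in the droplet map.
    """
    h, w = len(base), len(base[0])
    out = []
    for v in range(h):
        row = []
        for u in range(w):
            du, dv = indirect[v][u]
            # Offset the texture coordinate, clamping at the texture edges.
            uu = min(max(u + du, 0), w - 1)
            vv = min(max(v + dv, 0), h - 1)
            row.append(base[vv][uu])
        out.append(row)
    return out
```

A droplet-shaped indirect map with nonzero offsets only inside the droplet's footprint would distort just that region, leaving the rest of the window image unchanged, which matches the copy-out-then-distort rendering pass described above.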
Further exemplary non-limiting advantages provided by an illustrative non-limiting embodiment include:
- realistic wave physics in real time or near real time;
- complex wave geometry in real time or near real time;
- the ability for the user or game to adjust wave height from calm to tsunami;
- an environment-mapped, reflection-mapped wave scape that reflects the surrounding land, trees, clouds, buildings, water craft and other 3D objects;
- reflections that morph with changing waves and blend into the surface that lies beneath the transparent water;
- waves that deform from the pressure of water craft (this can, for example, be used to agitate the surface of the water in front of your opponents, creating a less stable surface);
- adaptive reduction of wave complexity (e.g., adaptive elimination or reduction of large, rolling waves across broad surfaces) to ensure appropriate frame rate even when additional demands are placed on processing resources; and
- realistic water droplet windowing effects.