Recent advances in computing power and related technology have fostered the development of a new generation of powerful software applications. Gaming, communications, and multimedia applications have all benefited from increased processing power and clock speeds. Indeed, the field of computer graphics, leveraged by a myriad of applications, has particularly benefited from the improved processing capability found in even the most rudimentary computing devices. Examples abound: gaming applications that once depicted characters as simple stick figures are now capable of rendering near life-like images of people, animals, and the like. In the realm of media, movies that once relied on clumsy animation for special effects now benefit from the ability to render life-like computer images.
This ability to create and render life-like characters has added a level of realism and personality to gaming applications, modeling applications, multimedia applications, and the like. While the increased processing power and speed have improved the quality of the rendered images used in such applications, a number of problems remain in generating life-like characters in near real-time to support interactive applications such as, for example, gaming applications and multimedia applications in which the user dynamically controls camera angles and proximity to the subject of the media content. One major problem that remains in generating and rendering life-like characters in near real-time is the inability of current computer graphics technology to dynamically render realistic hair (used interchangeably herein with fur, scales, etc.).
One of the distinguishing characteristics of mammals is that they are covered in hair (used interchangeably herein to describe hair, fur, scales, etc.). Thus, in order to render a life-like representation of such a mammal, the application modeling the creature must attempt to cover at least select surfaces of the character in hair. Prior art approaches to modeling such surface characteristics relied on ray tracing, introduced by Kajiya and Kay in a presentation entitled Rendering Fur with Three Dimensional Textures, in the Proceedings of SIGGRAPH 1989, pages 271–280. In the Kajiya and Kay ray-traced approach, a model with explicit geometric detail is represented as a volume of textures. While fine geometric modeling of surface features using ray tracing can provide convincing surface features, it is computationally expensive and is not, therefore, conducive to applications that support interactive features.
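To illustrate why a volume-texture approach is computationally expensive, consider that each shaded pixel requires marching a ray through the texel volume, sampling density at many points along the way. The sketch below is a simplified, hypothetical illustration of that per-ray accumulation loop, not the published Kajiya and Kay algorithm; the function name, parameters, and nearest-neighbour sampling are assumptions chosen for brevity.

```python
import numpy as np

def ray_march_volume(density, origin, direction, step=0.05, absorption=4.0):
    """Accumulate opacity along one ray through a density grid.

    density: 3D array with values in [0, 1], spanning the unit cube
    [0, 1)^3 (nearest-neighbour lookup for simplicity).
    Returns the fraction of light absorbed along the ray:
    0.0 for an empty volume, approaching 1.0 for a dense one.
    """
    direction = direction / np.linalg.norm(direction)
    transmittance = 1.0  # fraction of light surviving so far
    t = 0.0
    n = density.shape[0]
    # March in fixed steps until the ray exits the unit cube.
    while True:
        p = origin + t * direction
        if np.any(p < 0.0) or np.any(p >= 1.0):
            break
        idx = tuple((p * n).astype(int))  # nearest-neighbour texel lookup
        # Beer-Lambert style attenuation over one step.
        transmittance *= np.exp(-absorption * density[idx] * step)
        t += step
    return 1.0 - transmittance
```

Even this toy version performs dozens of volume samples per ray; repeated for every pixel of every frame, that cost is what makes ray-traced volume textures ill-suited to interactive rendering.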
Thus, a method and apparatus for modeling and rendering surface detail is presented, unencumbered by the deficiencies and limitations commonly associated with the prior art.