Three-dimensional (3D) computer imaging (or graphics) is a relatively new technical area. Despite many challenges, 3D computer imaging has become increasingly popular. As known in the art, 3D images or graphics can be generated with the aid of a digital computer and specialized 3D software. Generally speaking, 3D computer imaging may also refer to the process of creating such graphics, or to the field of study of 3D computer graphic techniques and related technology. 3D computer graphics differ from two-dimensional (2D) computer graphics in that a three-dimensional representation of geometric data is typically stored in the computer for the purposes of performing calculations and displaying (or rendering) 2D images. In general, the art of 3D modeling, which prepares geometric data for 3D computer graphics, is akin to sculpting, while the art of 2D graphics is more analogous to painting. However, those skilled in the art will readily appreciate that 3D computer graphics may rely on many of the same algorithms used by 2D computer graphics.
In computer graphics software, this distinction is occasionally blurred; some 2D applications use 3D techniques to achieve certain effects such as lighting, while some primarily 3D applications make use of 2D visual techniques. 2D graphics can be considered to be a subset of 3D graphics.
OpenGL (Open Graphics Library) and Direct3D are among the popular Application Program Interfaces (APIs) for the generation of real-time imagery. In this context, real-time generally means that image generation occurs “in real time,” or “on the fly.” Many modern graphics cards provide some degree of hardware acceleration based on these APIs, frequently enabling the display of complex 3D graphics in real time. However, a graphics card is not strictly necessary to create 3D imagery.
For simplification, the process of creating 3D computer graphics can be sequentially divided into three basic phases: a modeling phase, a scene layout setup phase, and a rendering phase.
The modeling phase (or stage) can be described as shaping individual objects that are later used in a 3D scene. A number of modeling techniques are known to those skilled in the art (e.g., constructive solid geometry, NURBS modeling, polygonal modeling, subdivision surfaces, implicit surfaces). It should be noted that a modeling process can also include editing object surface or material properties (e.g., color, luminosity, diffuse and specular shading components (more commonly called roughness and shininess), reflection characteristics, transparency or opacity, or index of refraction), adding textures, bump-maps and other features. 3D modeling can also include various activities related to preparing a 3D model for animation. For a complex character model, however, this becomes a stage of its own, known as rigging.
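The surface and material properties enumerated above can be sketched as a simple data structure. The field names and default values below are illustrative assumptions for this sketch; they do not correspond to any particular modeling package's API.

```python
from dataclasses import dataclass

@dataclass
class Material:
    """Illustrative surface/material properties for a 3D model.

    Field names are hypothetical; real modeling packages differ.
    """
    color: tuple = (1.0, 1.0, 1.0)    # base RGB color
    luminosity: float = 0.0           # self-emitted light
    roughness: float = 0.5            # diffuse shading component
    shininess: float = 0.5            # specular shading component
    reflectivity: float = 0.0         # mirror-like reflection strength
    opacity: float = 1.0              # 1.0 = fully opaque
    index_of_refraction: float = 1.0  # 1.0 = no bending of light

# A glass-like material, for example, is mostly transparent and refractive:
glass = Material(color=(0.9, 0.95, 1.0), opacity=0.1, index_of_refraction=1.5)
```

Editing material properties during modeling then amounts to changing these fields on the object's material before the scene is rendered.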
3D objects can be fitted with a skeleton, a central framework of an object with the capability of affecting the shape or movements of that object. This aids the process of animation, in that movement of the skeleton automatically affects the corresponding portions of the 3D model. At the rigging stage, the model can also be given specific controls to make animation easier and more intuitive, such as facial expression controls and mouth shapes (phonemes) for lip-syncing. 3D modeling can be performed by means of a dedicated program (e.g., Lightwave Modeler, Rhinoceros 3D, Moray), an application component (e.g., Shaper or Lofter in 3D Studio), or a scene description language (as in POV-Ray). In some cases, there is no strict distinction between phases; modeling can simply be part of the scene creation process (e.g., Caligari trueSpace). trueSpace is a 3D computer graphics and animation application developed by Caligari Corporation, originally created for the Amiga computer and later for the Windows platform. One of its most distinctive features is its interface, which uses mainly 3D widgets for most common editing operations. The software can be used for modeling, animating, rendering (using the Lightworks rendering engine), and basic post-processing.
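The way a skeleton's movement automatically affects the corresponding portions of a model can be sketched with a minimal skinning computation, in which each vertex blends the transforms of the bones that influence it. The 2D setup, weights, and transforms below are toy assumptions chosen only to illustrate the idea.

```python
import math

def rotate_z(point, angle):
    """Rotate a 2D point about the origin (a toy 'bone' transform)."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def skin_vertex(vertex, bone_transforms, weights):
    """Blend each bone's transformed vertex by that bone's skinning weight."""
    x = sum(w * bone_transforms[i](vertex)[0] for i, w in enumerate(weights))
    y = sum(w * bone_transforms[i](vertex)[1] for i, w in enumerate(weights))
    return (x, y)

# A vertex influenced 70% by a stationary bone and 30% by a bone rotated 90
# degrees ends up partway between the two bones' positions for it:
bones = [lambda p: p, lambda p: rotate_z(p, math.pi / 2)]
moved = skin_vertex((1.0, 0.0), bones, [0.7, 0.3])
```

Because the vertex follows the bones automatically, an animator only poses the skeleton; the mesh deformation falls out of the weighting.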
As the second basic phase of 3D computer graphics processing, scene setup can involve arranging virtual objects, lights, cameras and other entities in a 3D scene (or scene) which is later used to produce a still image or an animation. If used for animation, this phase usually makes use of a technique called “keyframing,” which facilitates the creation of complicated movement in the scene. With the aid of keyframing, instead of having to fix an object's position, rotation, or scaling for each frame of an animation, one needs only to set up key frames, between which the states of all intermediate frames are interpolated. Lighting can be an important aspect of scene setup.
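The interpolation at the heart of keyframing can be sketched in a few lines. The linear scheme below is the simplest possible choice; production animation systems typically offer spline and eased interpolation as well.

```python
def interpolate_keyframes(keyframes, t):
    """Linearly interpolate an animated value between key frames.

    keyframes: sorted list of (time, value) pairs authored by the animator.
    Every in-between frame is computed rather than set by hand.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
            return v0 + alpha * (v1 - v0)

# Two key frames: x = 0.0 at frame 0 and x = 10.0 at frame 24.
keys = [(0, 0.0), (24, 10.0)]
print(interpolate_keyframes(keys, 12))  # 5.0, halfway between the key frames
```

With only two authored key frames, all 23 in-between positions are derived automatically, which is exactly why keyframing makes complicated movement tractable.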
As is the case in real-world scene arrangement, lighting can be a significant contributing factor to the resulting aesthetic and visual quality of the finished work. The process of transforming representations of objects, such as the center coordinate of a sphere and a point on its circumference, into a polygon representation of that sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations (“primitives”) such as spheres, cones, etc., into so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of, e.g., squares) are popular because they have proven to be easy to render using scanline rendering. Polygon representations are not used in all rendering techniques, and in those cases the tessellation step is not included in the transition from abstract representation to rendered scene.
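The tessellation of a sphere primitive into a triangle mesh can be sketched as follows. The latitude/longitude subdivision used here is one common approach, and the stack/slice counts are arbitrary illustrative choices.

```python
import math

def tessellate_sphere(center, radius, stacks=8, slices=16):
    """Break an abstract sphere (center + radius) into a mesh of triangles."""
    cx, cy, cz = center

    def point(i, j):
        theta = math.pi * i / stacks      # latitude angle, pole to pole
        phi = 2.0 * math.pi * j / slices  # longitude angle around the axis
        return (cx + radius * math.sin(theta) * math.cos(phi),
                cy + radius * math.sin(theta) * math.sin(phi),
                cz + radius * math.cos(theta))

    triangles = []
    for i in range(stacks):
        for j in range(slices):
            p00, p01 = point(i, j), point(i, j + 1)
            p10, p11 = point(i + 1, j), point(i + 1, j + 1)
            if i > 0:               # skip degenerate triangles at the north pole
                triangles.append((p00, p10, p01))
            if i < stacks - 1:      # ...and at the south pole
                triangles.append((p01, p10, p11))
    return triangles

mesh = tessellate_sphere((0.0, 0.0, 0.0), 1.0)
print(len(mesh))  # 224 triangles: 2 * slices * (stacks - 1)
```

Increasing the stack and slice counts yields a smoother-looking sphere at the cost of more triangles for the scanline renderer to process.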
Rendering can be considered to be the final phase of creating the actual 2D image or animation from a prepared 3D scene. This phase is comparable to taking a photo or filming the scene after the setup is finished in real life. Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second.
Animations for non-interactive media, such as video and film, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to an hour or more for complex scenes. Rendered frames are stored on a hard disk, then possibly transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement. Two representative approaches are ray tracing and GPU (Graphics Processing Unit) based real-time polygonal rendering, and their goals differ. A ray-traced image can take seconds or minutes per frame to render, as photo-realism is the goal; this is the basic method employed in films, digital media, artistic works, etc. In contrast, in real-time rendering the goal is to show as much information as the eye can process in a thirtieth of a second; the goal here is primarily speed, not photo-realism. As such, exploitations can be made in the way the eye “perceives” the world. Thus, the final image presented is not necessarily that of the real world, but one which the eye can closely associate with the world. This is the basic method employed in games and interactive worlds. A Graphics Processing Unit or GPU (also occasionally called a Visual Processing Unit or VPU) is a dedicated graphics rendering device for a personal computer or game console. Modern GPUs are very efficient at manipulating and displaying computer graphics, and their highly parallel structure makes them more effective than typical CPUs for a range of complex 3D-related algorithms.
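The real-time versus offline trade-off described above amounts to a per-frame time budget, which the simple arithmetic below illustrates. The ten-minute offline frame time is an arbitrary example figure within the range mentioned above, not a measurement.

```python
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

# Real-time rendering at 30 frames per second must finish each frame in:
print(round(frame_budget_ms(30), 2))  # 33.33 ms

# By comparison, an offline ray-traced frame taking 10 minutes can spend
# roughly this many times more computation per frame than a 30 fps renderer:
offline_seconds = 10 * 60
ratio = round(offline_seconds / (frame_budget_ms(30) / 1000.0))
print(ratio)  # 18000
```

This four-orders-of-magnitude gap in per-frame computation is why offline rendering can pursue photo-realism while real-time rendering must instead exploit how the eye perceives the world.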