1. Technical Field
The present disclosure relates to the generation of graphic images by a computer, and in particular to the determination of the values of the pixels of such images.
2. Description of the Related Art
The present disclosure particularly applies to personal computers, game consoles, and portable electronic devices such as mobile telephones and personal digital assistants (PDAs).
The generation of graphic images comprises processing operations which commonly involve mathematically modeling the interaction of light with a three-dimensional scene seen from a given observation point. These so-called “rendering” processing operations generate a two-dimensional image of the scene seen from the observation point. To obtain an image quality equivalent to that of a photograph of a real-world scene, and to meet strict real-time requirements, these processing operations generally use a specialized processing unit, also referred to as a “graphic card” or “graphic processor”. Some graphic cards have an architecture enabling several operations to be performed in parallel.
The rendering processing operations are shared between the main processor (CPU) and the graphic processing unit of the computer. Classically, the main processor performs high-level operations, such as determining the position and the movement of objects in a scene, and generates, from these high-level operations, rendering commands and graphic data giving a general description of the scene. From such commands and graphic data, the graphic processing unit produces one or more images that are stored in a frame memory, from which they are displayed on a screen.
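The division of labor described above can be illustrated by a minimal sketch, in which all names (`cpu_frame_description`, `gpu_render`, the command dictionaries) are hypothetical stand-ins, not any real graphics API: the main processor builds a list of rendering commands describing the scene, and the graphic unit consumes them to fill a frame memory of pixel colors.

```python
def cpu_frame_description():
    """CPU side: a high-level description of the scene as a list of
    rendering commands (hypothetical format, for illustration only)."""
    return [
        {"op": "clear", "color": 0x000000},
        # A stand-in for a real draw call, which would cover many pixels.
        {"op": "set_pixel", "x": 2, "y": 1, "color": 0xFF0000},
    ]

def gpu_render(commands, width=4, height=4):
    """Graphic-unit side: execute the commands into a frame memory
    holding one color value per pixel."""
    frame = [[0x000000] * width for _ in range(height)]
    for cmd in commands:
        if cmd["op"] == "clear":
            for row in frame:
                for x in range(width):
                    row[x] = cmd["color"]
        elif cmd["op"] == "set_pixel":
            frame[cmd["y"]][cmd["x"]] = cmd["color"]
    return frame

frame = gpu_render(cpu_frame_description())
```

In a real system the commands are far richer (geometry, textures, state changes) and the frame memory is scanned out to the screen, but the producer/consumer structure is the same.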
A graphic processing unit particularly performs successive so-called “vertex”, “rasterization”, and “fragment” processing operations. The vertex processing converts the coordinates of object points in a two- or three-dimensional space, considered as vertices of polygonal (generally triangular) image zones, into coordinates of points in the image of the space seen by a user. The rasterization processing converts each image zone into a set of pixels called a “fragment”, comprising the coordinates in the image of all the pixels of the image zone, together with the color and texture attributes associated with the vertices of the image zone. The fragment processing particularly comprises determining characteristics or attributes, such as the color and the texture, of each of the pixels of the fragment. The fragment processing supplies the pixel colors that are stored in a frame memory of a display device. Certain graphic processing units are programmable to the point where programs implementing complex algorithms for illuminating and shading the objects of a scene can be executed.
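The three stages above can be sketched as follows. This is a simplified illustration with hypothetical function names, assuming a pinhole projection for the vertex stage and barycentric interpolation of per-vertex colors for rasterization; it is not any particular graphic processing unit's pipeline.

```python
def vertex_stage(vertices_3d, width, height, focal=1.0):
    """Vertex processing: project 3D object points (x, y, z) to 2D image
    coordinates (simple pinhole camera, assumed for illustration)."""
    out = []
    for x, y, z in vertices_3d:
        sx = int(width / 2 + focal * x / z * width / 2)
        sy = int(height / 2 - focal * y / z * height / 2)
        out.append((sx, sy))
    return out

def edge(a, b, p):
    """Signed area test: which side of edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, colors, width, height):
    """Rasterization: turn one triangular image zone into pixels
    (x, y, color), with the color interpolated from the vertex colors
    using barycentric weights."""
    a, b, c = tri
    area = edge(a, b, c)
    frags = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            if area != 0 and w0 >= 0 and w1 >= 0 and w2 >= 0:
                l0, l1, l2 = w0 / area, w1 / area, w2 / area
                color = tuple(l0 * ca + l1 * cb + l2 * cc
                              for ca, cb, cc in zip(*colors))
                frags.append((x, y, color))
    return frags

def fragment_stage(fragments, width, height):
    """Fragment processing: write each pixel's final color into the
    frame memory (real shaders would also apply texturing, lighting)."""
    frame = [[(0, 0, 0)] * width for _ in range(height)]
    for x, y, color in fragments:
        frame[y][x] = color
    return frame
```

For example, `rasterize(((1, 1), (6, 1), (3, 5)), ((255, 0, 0), (0, 255, 0), (0, 0, 255)), 8, 8)` produces the fragments of a triangle whose interior pixels blend the three vertex colors, and `fragment_stage` writes them into an 8×8 frame memory.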
The fragment processing proves to be the most costly in computing time and power, since it must be executed on each image zone (or fragment) of each image to be generated. At a frame frequency in the order of 30 frames per second, several tens of millions of fragments may need to be processed per second. Furthermore, the definition, and therefore the number of pixels, of the screens of portable devices such as mobile telephones tends to increase. Currently, these screens are close to the VGA format (640×480 pixels). In this case, the number of pixels to be calculated for each image is thus in the order of 300,000.
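The figures above can be checked with a short computation, taking the VGA format and frame frequency cited in the text, and counting one fragment per pixel as a lower bound:

```python
width, height = 640, 480   # VGA screen format cited above
frame_rate = 30            # frames per second

pixels_per_image = width * height                      # ~300,000 as stated
fragments_per_second = pixels_per_image * frame_rate   # lower bound:
                                                       # one fragment per pixel

print(pixels_per_image, fragments_per_second)  # 307200 9216000
```

Since overlapping image zones cause each pixel to be covered by several fragments (overdraw), the actual number is a small multiple of this roughly 9.2 million lower bound, consistent with the several tens of millions of fragments per second mentioned above.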
However, these portable devices do not have graphic processors as powerful as those of personal computers. It is therefore desirable to reduce the computing time needed to process the fragments, while degrading the quality of the images as little as possible.