The present application relates to computer graphics rendering systems and methods, and particularly to handling of texture data used by rendering accelerators for 3D graphics.
Background: 3D Computer Graphics
One of the driving features in the performance of most single-user computers is computer graphics. This is particularly important in computer games and workstations, but is generally very important across the personal computer market.
For some years the most critical area of graphics development has been in three-dimensional ("3D") graphics. The peculiar demands of 3D graphics are driven by the need to present a realistic view, on a computer monitor, of a three-dimensional scene. The pattern written onto the two-dimensional screen must therefore be derived from the three-dimensional geometries in such a way that the user can easily "see" the three-dimensional scene (as if the screen were merely a window into a real three-dimensional scene). This requires extensive computation to obtain the correct image for display, taking account of surface textures, lighting, shadowing, and other characteristics.
The starting point (for the aspects of computer graphics considered in the present application) is a three-dimensional scene, with specified viewpoint and lighting (etc.). The elements of a 3D scene are normally defined by sets of polygons (typically triangles), each having attributes such as color, reflectivity, and spatial location. (For example, a walking human, at a given instant, might be translated into a few hundred triangles which map out the surface of the human's body.) Textures are "applied" onto the polygons, to provide detail in the scene. (For example, a flat carpeted floor will look far more realistic if a simple repeating texture pattern is applied onto it.) Designers use specialized modelling software tools, such as 3D Studio, to build textured polygonal models.
The 3D graphics pipeline consists of two major stages, or subsystems, referred to as geometry and rendering. The geometry stage is responsible for managing all polygon activities and for converting three-dimensional spatial data into a two-dimensional representation of the viewed scene, with properly-transformed polygons. The polygons in the three-dimensional scene, with their applied textures, must then be transformed to obtain their correct appearance from the viewpoint of the moment; this transformation requires calculation of lighting (and apparent brightness), foreshortening, obstruction, etc.
However, even after these transformations and extensive calculations have been done, there is still a large amount of data manipulation to be done: the correct values for EACH PIXEL of the transformed polygons must be derived from the two-dimensional representation. (This requires not only interpolation of pixel values within a polygon, but also correct application of properly oriented texture maps.) The rendering stage is responsible for these activities: it "renders" the two-dimensional data from the geometry stage to produce correct values for all pixels of each frame of the image sequence.
The most challenging 3D graphics applications are dynamic rather than static. In addition to changing objects in the scene, many applications also seek to convey an illusion of movement by changing the scene in response to the user's input. Whenever a change in the orientation or position of the camera is desired, every object in a scene must be recalculated relative to the new view. As can be imagined, a fast-paced game needing to maintain a high frame rate will require many calculations and many memory accesses.
FIG. 2 shows a high-level overview of the processes performed in the overall 3D graphics pipeline. However, this is a very general overview, which ignores the crucial issues of what hardware performs which operations.
Texturing
There are different ways to add complexity to a 3D scene. Creating more and more detailed models, consisting of a greater number of polygons, is one way to add visual interest to a scene. However, adding polygons necessitates paying the price of having to manipulate more geometry. 3D systems have what is known as a "polygon budget," an approximate number of polygons that can be manipulated without unacceptable performance degradation. In general, fewer polygons yield higher frame rates.
The visual appeal of computer graphics rendering is greatly enhanced by the use of "textures." A texture is a two-dimensional image which is mapped into the data to be rendered. Textures provide a very efficient way to generate the level of minor surface detail which makes synthetic images realistic, without requiring transfer of immense amounts of data. Texture patterns provide realistic detail at the sub-polygon level, so the higher-level tasks of polygon-processing are not overloaded. See Foley et al., Computer Graphics: Principles and Practice (2.ed. 1990, corr. 1995), especially at pages 741-744; Paul S. Heckbert, "Fundamentals of Texture Mapping and Image Warping," Thesis submitted to Dept. of EE and Computer Science, University of California, Berkeley, Jun. 17, 1994; Heckbert, "Survey of Computer Graphics," IEEE Computer Graphics, Nov. 1986, pp. 56; all of which are hereby incorporated by reference. Game programmers have also found that texture mapping is generally a very efficient way to achieve very dynamic images without requiring a hugely increased memory bandwidth for data handling.
A typical graphics system reads data from a texture map, processes it, and writes color data to display memory. The processing may include mipmap filtering which requires access to several maps. The texture map need not be limited to colors, but can hold other information that can be applied to a surface to affect its appearance; this could include height perturbation to give the effect of roughness. The individual elements of a texture map are called "texels."
Awkward side-effects of texture mapping occur unless the renderer can apply texture maps with correct perspective. Perspective-corrected texture mapping involves an algorithm that translates "texels" (pixels from the bitmap texture image) into display pixels in accordance with the spatial orientation of the surface. Since the surfaces are transformed (by the host or geometry engine) to produce a 2D view, the textures will need to be similarly transformed by a linear transform (normally projective or "affine"). (In conventional terminology, the coordinates of the object surface, i.e. the primitive being rendered, are referred to as an (s,t) coordinate space, and the map of the stored texture is referred to as a (u,v) coordinate space.) The transformation in the resulting mapping means that a horizontal line in the (x,y) display space is very likely to correspond to a slanted line in the (u,v) space of the texture map, and hence many additional reads will occur, due to the texturing operation, as rendering walks along a horizontal line of pixels.
Background: Data and Memory Management
Due to the extremely high data rates required at the end of the rendering pipeline, many features of computer architecture take on new complexities in the context of computer graphics (and especially in the area of texture management).
Caching
In defining computer architectures, one of the basic trade-offs is memory speed versus cost: faster memories cost more. SRAMs are much more expensive (per bit) than DRAMs, and DRAMs are much more expensive (per bit) than disk memory. The price of all of these has been steadily decreasing over time, but this relationship has held true for many years. Thus computer architectures usually include multiple levels of memory: the smallest and fastest memory is most closely coupled to the processor, with one or more successively larger, slower, and cheaper layers behind it.
The fastest memory is that which is completely integrated with the processor. An essential part of microprocessor architecture is various read-write registers, which are intimately intertwined with the hardware logic circuits of the microprocessor. Some of these registers have dedicated functions, but others may be provided for "scratchpad" space usable by software. These registers are often overlooked in the memory hierarchy; but many of them can be directly accessed by software, and they may therefore be thought of as the innermost circle of the memory hierarchy. (A variant on this is a multi-chip module which includes additional memory in the same package with a microprocessor chip. An example of this is the DS5000 module from Dallas Semiconductor, which includes a dedicated local bus, with a battery-backed SRAM, in the same sealed package as a microcontroller.)
When the central processing unit (CPU) executes software, it will often have to read or write to an arbitrary (unpredictable) address. This address will correspond to some specific portion of some specific memory chip in the main memory. (In a virtual memory system, an arbitrary address may correspond to a physical location which is in main memory or mass storage (e.g. disk). In such systems, address translation performs fetches from mass storage if needed, transparently to the CPU. Virtual memory management, like cache management, is an important architectural design choice, and "memory management" logic often performs functions related to virtual memory management as well as to cache management. However, the needs and impact of virtual memory operation are largely irrelevant to the disclosed innovations, and will be largely ignored in the present application.) However, main memory typically has a minimum access time which is several times as long as the basic CPU clock cycle. This causes "wait states," which are undesirable. The net effective speed of a large DRAM memory can be increased by using bank organization and/or page mode accesses; but such features can still provide only a limited speed improvement, and net effective speed of a large DRAM memory (as seen by the processor) will still typically be much slower than that of the processor. (For example, a 500 MHz processor will have a clock period of about 2 nsec. However, low-priced DRAM memories typically have access times of 50 ns or more. Thus, when a 2 ns processor attempts to read 50 ns DRAM memory, the processor must wait for several of its cycles until the memory returns data. Such "wait states" degrade the net performance of the processor.) Thus, further speed improvement is still needed, and other techniques must be used to achieve this.
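The wait-state arithmetic in the example above can be sketched as a toy calculation (the function name and the simplified stall model are illustrative assumptions, not a description of any particular processor or memory part):

```python
import math

def wait_states(cpu_clock_ns, mem_access_ns):
    """Full CPU cycles stalled, beyond the first, while one memory access
    completes (a simplified model that ignores bus and controller overhead)."""
    return max(math.ceil(mem_access_ns / cpu_clock_ns) - 1, 0)

# A 500 MHz processor (2 ns cycle) reading 50 ns DRAM waits about 24 cycles.
print(wait_states(2.0, 50.0))  # 24
```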
The addresses actually used by almost any software program will be found to include a high concentration of accesses within a few neighborhoods of address space. Thus, it has long been recognized that computer performance, for a given price, can be improved by using a small amount of fast (expensive) memory to provide temporary storage for recently-accessed addresses. Whenever the same address is accessed again, it can be read from the fast memory, instead of the slower main memory. Such memory is called cache memory. One or more layers of cache memory may be used.
Usually cache memory includes one or more fast SRAM chips, which are closely coupled to the CPU by a high-speed bus. A variation of this, used in the Intel x86 processors, is an on-chip cache memory which is integrated on the same chip with a microprocessor. Such on-chip cache memory is often used in combination with a larger external cache. Thus, this is one of the first examples, in PC architectures, of multi-level cache hierarchy. Multi-level cache architectures have been widely discussed in the last decade, and have been used in a number of high-speed computers.
The main memory usually consists of volatile semiconductor random access memory (typically DRAM). This will normally be organized with various architectural tricks to hasten average access time, but only a limited amount of improvement can be readily achieved by such methods. (A small amount of nonvolatile memory, e.g. ROM, EPROM, EEPROM, or flash EPROM, will also be used to store initialization routines. Some of these technologies have a cost per bit which is nearly as low as DRAM, but these technologies tend to have access times which are slower than DRAM. Moreover, since these are read-only or read-mostly memories, they are not suited for general-purpose random-access memory.)
Behind the main memory, there will be one or more layers of nonvolatile mass storage. Nearly any computer will have a magnetic disk drive, and may also have an optical read-only disk drive (CD-ROM), magneto-optic memory, magnetic tape, etc.
Some further background discussion of cache management can be found in Przybylski, Cache and Memory Hierarchy Design (1990); Handy, The Cache Memory Book (1998); Hennessy and Patterson, Computer Architecture: a Quantitative Approach (2.ed. 1996); Hwang and Briggs, Computer Architecture and Parallel Processing (1984); and Loshin, Efficient Memory Programming (1998); all of which are hereby incorporated by reference.
Cache Memory Operation and Implementation Choices
The above general discussion shows why a cache memory may be desirable in principle. However, there are significant variations possible in the implementation of cache memory. Some of the details of cache operation will now be reviewed, to show where important design choices appear.
When the CPU needs to read data, it outputs the address and activates the control signals. In a cache system, the cache controller will check the most significant bits of this address against a table of cached data. If a match is found (i.e. a "cache hit" occurs), the controller must find where this data lies in the fast memory of the cache. The cache controller blocks or halts the read from main memory, and instead commands the cache memory to output the contents of the physical address at which the correct data is stored.
In a direct-mapped cache system, each line of data, if present, can only be in one place in the cache memory's address space. Thus, as soon as the cache controller detects a hit, it immediately knows what physical address to access in the cache memory SRAM. By contrast, in a fully associative cache memory, a block of data may be anywhere in the cache. The risk in a direct-mapped system is that some combinations of lines cannot simultaneously be present in cache. The penalty in a fully associative system is that the controller has to look through a table of all cache addresses to find the desired block of data. Thus, many systems use set-associative mapping (where a given block of data may be anywhere within a proper subset of the cache's physical address space).
A set-associative cache architecture will commonly be described as having a certain number of "ways," e.g. "4-way" or "2-way." As with a direct-mapped cache architecture, the most significant bits of the address define which line in cache can contain the cached data. However, with set-associative cache architectures, each line contains several units of data. In a 4-way set-associative cache, each line will contain four "ways," and each way consists of tag bits plus the corresponding data bits.
If no match is found (i.e. a "cache miss" occurs), the controller allows an access to main memory to continue (or begin). When the data is returned from main memory (which will typically require at least several CPU clock cycles), the CPU receives it immediately, and the cache controller loads it into the cache memory. The cache location used for new data may be randomly chosen, or may be chosen by computation of which data is least-recently used.
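The hit/miss flow just described can be sketched in software for the direct-mapped case (a toy model; the line size and line count are hypothetical illustrations, and a real controller performs these checks in hardware):

```python
# Toy direct-mapped cache: 512 lines of 16 bytes (8 KB), hypothetical sizes.
LINE_SIZE = 16
NUM_LINES = 512

tags = [None] * NUM_LINES   # tag bits stored alongside each cache line
data = [None] * NUM_LINES   # the cached line contents

def lookup(address, main_memory_read):
    """Return (line_data, hit) for `address`, loading the cache on a miss."""
    line_addr = address // LINE_SIZE
    index = line_addr % NUM_LINES     # the one line this address can occupy
    tag = line_addr // NUM_LINES      # upper address bits checked for a hit
    if tags[index] == tag:
        return data[index], True      # cache hit: no main-memory access
    # Cache miss: the main-memory read proceeds, and the result is cached.
    data[index] = main_memory_read(line_addr * LINE_SIZE)
    tags[index] = tag
    return data[index], False
```

A second access to the same address then hits, while an address exactly one cache size away maps to the same line and evicts it, which is the direct-mapped conflict risk noted above.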
Caching in Direct-Memory-Access Systems
Personal computer systems, unlike larger computer systems, have historically used a single-processor architecture. In such architectures, a single microprocessor runs the application software. (However, many other microprocessors, microcontrollers, or comparably complex pieces of programmable logic, have been employed in support tasks, particularly for I/O management.) By contrast, supercomputers, mainframes, and many minicomputers use multiprocessing systems. In such systems many CPUs are active at the same time to execute the primary application software, and the allocation of tasks is typically at least partly invisible to the application software.
Thus, personal computer designers have not needed to pay much attention to the data synchronization issues which can be so critical in larger systems. However, direct-memory-access is typically provided in personal computer systems, and presents some of the same issues as a true multiprocessing system.
One feature which rapidly became standard, in the early development of personal computer architectures, is direct memory access. If peripheral devices are allowed to access memory directly, then the CPU can perform other tasks while a long transfer of data is occurring. However, the possibility that data may be accessed independently of the CPU means that problems of data coherency may arise.
The simple approach to such problems of data coherency has been to use pure write-through caching operation. This avoids coherency problems, but means that write operations derive no benefit whatsoever from the presence of a cache.
Specifications of Cache Memory
The unit of data handled by the cache is referred to as a "line" of data. (For example, in the 486's 8 KB on-chip cache, each cache line is 16 bytes long.)
Cache line size can impact system performance. If the line size is too large, then the number of blocks that can fit in the cache is reduced. In addition, as the line length is increased the latency for the external memory system to fill a cache line increases, reducing overall performance.
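This tradeoff can be illustrated by splitting an address into tag, index, and offset fields for a hypothetical direct-mapped cache; doubling the line size halves the number of lines the same capacity can hold (the function name and sizes are illustrative assumptions):

```python
def address_fields(address, cache_bytes, line_bytes):
    """Split an address into (tag, index, offset) for a direct-mapped cache."""
    num_lines = cache_bytes // line_bytes
    offset = address % line_bytes                    # byte within the line
    index = (address // line_bytes) % num_lines      # which line to check
    tag = address // (line_bytes * num_lines)        # bits compared for a hit
    return tag, index, offset

# Doubling the line size halves the number of lines an 8 KB cache can hold:
print(8192 // 16)   # 512 lines of 16 bytes
print(8192 // 32)   # 256 lines of 32 bytes
```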
Memory Controllers (Cache Controllers)
Due to the complexity and criticality of caching and other memory management issues, a wide variety of custom VLSI integrated circuits for memory management have been offered by various chip vendors. One of particular interest is the Intel 82495XP Cache controller chip. This chip (which was originally developed for use with Intel's 860 RISC processor) permits block-wise programming of cache modes, so that cache modes can be assigned to different blocks of memory.
Texture Caching
A recurrent problem with texture mapping is the amount of data each texture map contains. If it is of high quality and detail it may require a substantial amount of storage space. The size of texture maps may be increased if mipmap filtering is supported. Simply moving textures from one physical storage location to another may be a time-consuming operation. In a normal graphics system the time taken to transfer a texture from disk or system memory to the graphics system may be significantly more than the time taken to apply the texture. Network applications, in which the application and graphics system are on separate machines linked by a low bandwidth connection, aggravate this problem. Improvements can be made by caching the texture locally in the graphics system, but the time taken to transfer it just once may be prohibitive.
Caching would be particularly desirable for texture management in 3D graphics. The desirability of some form of texture caching is easily demonstrated by a simple calculation. If the target performance is to do trilinear filtering in a single cycle, then 8 texels per output fragment are required. If each texel is in true color (i.e. 32 bits per pixel), then the texture read bandwidth is 32 bytes per cycle, or (assuming a 100 MHz bus) 3.2 GB/s. With clever cache design this can be reduced to 1.25 texels read per pixel (assuming the texture maps are very much larger than will fit into the cache), i.e. 500 MB/s. (Note the trivial case where the texture maps fit into cache and are already loaded is an easy one to solve, but isn't useful with real world scenarios.) Caching texture maps is not a new idea in itself, but previous implementations leave room for improvement.
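The bandwidth figures above can be checked with a one-line calculation (a sketch; the 100 MHz clock and 32-bit texels are the assumptions stated in the text):

```python
def texture_read_bandwidth(texels_per_fragment, bytes_per_texel, clock_hz):
    """Texture-read bandwidth in bytes/second at one fragment per cycle."""
    return texels_per_fragment * bytes_per_texel * clock_hz

# Uncached trilinear filtering: 8 texels x 4 bytes at 100 MHz = 3.2 GB/s.
print(texture_read_bandwidth(8, 4, 100e6))
# With caching, about 1.25 texels actually read per pixel: 500 MB/s.
print(texture_read_bandwidth(1.25, 4, 100e6))
```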
Texture Caching With Background Preloading
The present inventor has realized that, in 3D graphics systems, loading fetched data into cache is itself a source of bottlenecks. Thus prefetching data is NOT enough to reliably maintain the necessary data transfer rate. The present invention provides preloading into cache, in addition to any prefetching operation which may be used.
As noted above, caching memory architectures have long been used in general-purpose computers. However, there turn out to be some surprising difficulties in using this idea in computer graphics (especially for texture memory). The present application discloses several innovations related to virtualization and caching of texture memory.
Notable (and separately innovative) features of the texture caching architecture described in the present application include at least the following: expedited loading of texel data (preloading, not just prefetching); an improved definition of keys (rather than addresses) for cache lookup; and an innovative cache replacement policy.
Expedited Loading of Texel Data
When a cache miss occurs the simplest thing to do is to stall all the processing until the texture data has been returned. In GLINT chips the issuing of addresses (to read texture data when a cache miss has occurred) is separated from the actual filtering operations (which will use the texture data) by FIFOs. This allows the cache hit testing and address generation to proceed unhindered until the internal FIFOs fill (due to the memory taking too long to return the data). The texture filtering has to stall until all the data it needs is available.
To generate an output filtered texel may take from one to eight memory reads to fetch all the data (simply because of alignment between the 8 texels and how the patched texture map is stored in memory), although it will normally be one or two reads once a steady state has been reached. Each memory read returns 16 bytes, so, in general, once the data has been received for the stalled filter operation there is sufficient data for the following few filter operations as well.
On earlier chips the command (message) which instigates the filtering also records how much data is being read. At the point the filter operation is about to be done, it will be delayed while the data is read from the input FIFO and clocked into the cache. If the input FIFO is empty then there is no choice but to wait, as the memory has been too busy servicing other requests to deliver the texture data within the latencies allowed for by the FIFOs. Once the data has been clocked into the cache, the filtering is done. In this scenario the cost of loading the cache is amortised over the number of filter operations it provides data for, but it is still an overhead we wish to avoid, and must avoid if a sustained rate of 1 filtered texel per cycle is to be achieved.
Expedited loading of the cache allows the texture data read from memory to be loaded into the cache as soon as it is available rather than waiting for the filter operation (which requires the data) to occur. When this is working well it allows cache loads to be hidden under earlier filter operations rather than being an overhead on the instigating filter operation. The cache can load 16 bytes of data per cycle so its load performance is matched to the memory bandwidth.
An example might make this clearer. When doing bilinear filtering with a zoom ratio of 1:1, with 32 bit texels arranged in a 2×2 patch (as they normally are for us), there will be a memory demand of:
4 0 2 0 2 0 2 0 2
for each filtered texel produced. Note that the initial read of 4 texels is the worst case, occurring at the start of a scan line when the cache is empty. If the cache is able to hold the texel data from the previous scan line then the pattern of accesses might be:
0 0 0 0 0 0 0 0 0
when all the data is supplied from cache or
2 0 1 0 1 0 1 0 1
when one row of texel data is supplied from cache and the other row is read from memory.
Consider the worst case pattern of 4 0 2 0 2 0 2, etc. The filtering is stalled until the first 4 memory reads have returned data, and it may take tens of cycles for the data to be returned. While the filtering is stalled, the address generation has proceeded, and the memory controller will (in consecutive cycles) start to return the 2 sets of data for every other output texel. The first 4 cache loads are done in 4 cycles and the first filter is done in the next cycle. The second output texel does not need any new data, so it is done in the following cycle. While the first and second output texels are being calculated, these two cycles can be used to load the two memory data required by the third output texel; thus when the third output texel is computed, all the data it needs is ready and waiting. This sequence carries on for subsequent texels.
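The memory-demand patterns above can be reproduced by a small simulation (a sketch under simplifying assumptions: 2×2 texel patches, one 2×2 bilinear footprint per output texel, and a cache large enough to hold every patch touched; `patch_reads` is a hypothetical helper, not part of any chip):

```python
def patch_reads(xs, y, cache):
    """Count patch fetches for bilinear footprints along one texel row.
    Each output texel needs the 2x2 texel neighbourhood at (x, y); texels
    are stored in 2x2 patches, so a footprint touches 1, 2, or 4 patches."""
    reads = []
    for x in xs:
        needed = {(tx // 2, ty // 2) for tx in (x, x + 1) for ty in (y, y + 1)}
        new = needed - cache        # patches not already held in the cache
        reads.append(len(new))
        cache |= new                # loaded patches stay cached
    return reads

cache = set()
# Worst case: footprints straddle patch boundaries and the cache is empty.
print(patch_reads(range(1, 10), 1, cache))  # [4, 0, 2, 0, 2, 0, 2, 0, 2]
# Next texel row falls in already-cached patches: all data comes from cache.
print(patch_reads(range(1, 10), 2, cache))  # [0, 0, 0, 0, 0, 0, 0, 0, 0]
# One row of texels cached, the other row fetched from memory.
print(patch_reads(range(1, 10), 3, cache))  # [2, 0, 1, 0, 1, 0, 1, 0, 1]
```

A single footprint-versus-patch model thus reproduces all three access patterns listed above.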
What features are needed to make this work?
Although the cache loading is asynchronous to the texel filtering, two events must be guarded against: the filtering starting before its data has arrived, and the cache load occurring too early and overwriting data which hasn't been used yet.
The first event is handled by incrementing a counter every time a cache line is loaded and decrementing the counter by the number of cache lines a filter operation requires to be loaded before it can proceed. A filter operation is only allowed to proceed if the counter holds a value greater than or equal to the number of cache lines required by the filter operation.
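This counter mechanism might be modeled as follows (a minimal sketch; the class and method names are invented for illustration):

```python
class LoadCounter:
    """Gate between asynchronous cache loads and filter operations: loads
    increment the counter; a filter proceeds only when enough lines have
    arrived, and then consumes its share of the count."""
    def __init__(self):
        self.count = 0

    def line_loaded(self):
        self.count += 1

    def try_filter(self, lines_required):
        if self.count >= lines_required:
            self.count -= lines_required
            return True    # all required cache lines have been loaded
        return False       # data not yet arrived: the filter must stall

c = LoadCounter()
c.line_loaded()
print(c.try_filter(2))   # False: only 1 of the 2 required lines has loaded
c.line_loaded()
print(c.try_filter(2))   # True: both lines present, the filter proceeds
```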
The second event is much more difficult to handle. The two basic options are: to delay the load if, at the point the data arrives, an outstanding filter operation is detected that references the cache line about to be overwritten; or to delay issuing the memory read (and subsequently the corresponding cache line load) until an unused cache line is found. The solution we have used is the second one. Each read is tagged with the destination cache line it is going to be written to, and before the read is issued all the outstanding filter operations (including the one we are currently working on) are checked to see if they include this destination cache line. If they do, then we select another cache line to replace and do the tests again. The selection process carries on until a free cache line is found. Normally the cache line we first choose will be free, so this is an efficient process. As the search proceeds, cache lines will be automatically freed up as filter operations complete, so we can always guarantee we will find a free slot.
The FIFO which holds the outstanding filter operations is searchable, i.e. each FIFO entry can be checked in parallel to see if it references the candidate cache line to replace. Each filter operation in the FIFO specifies the locations in the cache where its data is going to come from so the cache lines information is already present.
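The replacement selection described above, with its check against the searchable FIFO of outstanding filter operations, might be sketched as follows (names are hypothetical; the real check is performed in parallel hardware across all FIFO entries):

```python
def choose_free_line(candidates, outstanding_ops):
    """Pick a cache line to replace that no queued filter operation still
    references. `outstanding_ops` holds, per queued filter, the set of
    cache-line indices it will read (the searchable-FIFO contents)."""
    referenced = set().union(*outstanding_ops)
    for line in candidates:
        if line not in referenced:
            return line         # safe to overwrite: no pending reader
    return None                 # keep waiting: lines free up as filters finish

# Lines 0, 1 and 3 are still referenced by queued filters; line 2 is free.
print(choose_free_line([0, 1, 2, 3], [{0, 3}, {1}]))  # 2
```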
Most graphics operations are correlated to frame buffer location, and hence have a predictable locality of reference; but texture memory management is much more difficult.
In general, texture operations differ from other data transfer operations in that:
overall bandwidth can be very high;
individual reads are likely to be larger;
successive accesses show strong locality of reference (possibly multi-locality); and
there are no writes (all reads).
However, the transforms used in 3D graphics cause serious difficulty in managing texture memory. Suppose that the texture map is linear, and that rasterization is proceeding in a linear path through the frame buffer: the successive accesses to the texture can occur AT ANY ANGLE in the texture coordinate space. (Indeed, the path defined by these accesses will also be slightly curved!) Thus even though texture accesses tend to exhibit strong locality of reference, this curved path makes optimal prediction of location very tough.
To manage texture accesses under these conditions, a fully associative cache architecture would be best (since there is no relationship between position in texture memory and location in frame buffer) but a direct-mapped cache is simple and cheap to implement.
There are two driving problems with texture preloading:
1) Stalling on a cache miss causes delay in the whole system; pre-fetches have been a partial solution to this problem. However, caching issues with texture operations are different from the caching issues with other graphics read operations. In non-texture operations we're usually reading only a small amount of data at a time. Texture data handling issues are different because it can take up to four cycles to load EACH call; up to EIGHT cycles if you're doing trilinear filtering (two textures).
It is still true that, if the active step TOTALLY beats the retrieval, you just have to wait. However, otherwise, we can allow the data to flow straight into memory WITHOUT waiting for the active step.
2) Data going early to cache CANNOT be allowed to overwrite valid data which is already referenced by queued-up commands, but not yet used. This is a key concept: the problem of a later step's data corrupting an earlier step's valid data might be referred to as "patricide"; the embodiments disclosed below avoid this problem.
To avoid the problem of patricide, the preferred embodiment will not issue a memory-read message until there is a cache line available. Preferably a cache line is assigned as soon as we request a cache load. When we have a miss we can decide which cache line to go into. Note the diagram in FIG. 10: the upper part of this diagram shows the organization for texture virtual memory management, and the bottom part shows the organization for texture caching.
Of course, before the process stalls, the on-chip FIFO (the M-FIFO) can be checked to see if it has the data you need.
Optionally, some dithering can be added into the cache assignments, to avoid over-concentration within the cache.
Note that per-patch fetching (with locality) means that multiple misses can be fixed with one cache line load, so a little buffering in the fetch requests adds efficiency. (In principle larger patches would be better for this, but too large patches waste bandwidth.)
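The point about per-patch fetching can be illustrated with a small sketch (a hypothetical helper; it collapses per-texel misses into distinct patch fetches, each satisfied by one cache line load):

```python
def coalesce_fetches(missed_texels, patch_w=2, patch_h=2):
    """Collapse per-texel misses into distinct per-patch cache-line loads."""
    patches = {(x // patch_w, y // patch_h) for (x, y) in missed_texels}
    return sorted(patches)

# Four missed texels, but only two distinct 2x2 patches need fetching.
print(coalesce_fetches([(0, 0), (1, 0), (0, 1), (2, 0)]))  # [(0, 0), (1, 0)]
```

Larger patches would coalesce more misses per fetch, but, as noted above, too large a patch wastes bandwidth on texels that are never used.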
Without this invention, data which has been prefetched but not preloaded would typically be sitting in a FIFO. Thus the present invention provides a further improvement in throughput, by optimizing a feature which previously was little regarded.