The present invention relates to data caching in general, and more particularly to a second-level (L2) cache that services read requests out of order.
Modern graphics processing circuits process very large amounts of data to generate detailed graphics images for games and commercial applications. Textures are one type of such data. Textures are the surface patterns applied to structures in a graphics image. They are made up of individual texels, and often several texels contribute to each pixel of an image.
Texels are processed in a graphics processor by texture filters and shaders. A texture cache stores textures until needed by a filter and shader. But memory space is limited; these caches cannot store every texel that may be needed. Accordingly, a higher level cache is used; this is referred to as a second-level or L2 cache. If the texture cache does not have a needed texel, it requests it from the L2 cache. But the L2 cache is also limited; when it does not have a requested texel, it retrieves it from a graphics memory. When a texture cache requests data from the L2 cache, if the data is present, the result is an L2 cache hit. If the data is absent from the L2 cache, an L2 cache miss is said to occur.
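The two-level lookup described above can be sketched as follows. This is a minimal, hypothetical software model, not the invention itself: caches are modeled as dictionaries keyed by texel address, graphics memory is assumed always to hold the data, and all names are illustrative.

```python
# Hypothetical model of a texture cache backed by an L2 cache, which is
# in turn backed by graphics memory. All names and structures are
# illustrative assumptions, not taken from the source document.
GRAPHICS_MEMORY = {addr: f"texel@{addr}" for addr in range(16)}

class L2Cache:
    def __init__(self):
        self.lines = {}              # address -> texel data

    def read(self, addr):
        if addr in self.lines:       # L2 cache hit: data already present
            return self.lines[addr], "hit"
        # L2 cache miss: retrieve from graphics memory (via the frame
        # buffer interface in the real design), then fill the cache.
        data = GRAPHICS_MEMORY[addr]
        self.lines[addr] = data
        return data, "miss"

class TextureCache:
    def __init__(self, l2):
        self.l2 = l2
        self.lines = {}

    def read(self, addr):
        if addr in self.lines:       # first-level (texture cache) hit
            return self.lines[addr]
        data, _ = self.l2.read(addr) # first-level miss: ask the L2 cache
        self.lines[addr] = data
        return data

l2 = L2Cache()
tc = TextureCache(l2)
tc.read(3)                 # misses both levels; fills from graphics memory
data, status = l2.read(3)  # the same address is now an L2 hit
```

In this toy model the first read of an address misses in both caches and fills them, so a second read of the same address is an L2 hit.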
When an L2 miss occurs, the L2 cache requests data from the graphics memory via a frame buffer interface. The round trip for this request can take hundreds of clock cycles. By comparison, a hit can be serviced very quickly since the data is already present in the L2 cache. But conventional texture cache designs require data to be returned in the order it was requested. Since the time to service a miss is long, subsequent hits may be stalled behind an earlier miss. Because of this, many cache circuits artificially slow the response to a hit, or use complicated logic to reorder returned data into the original request sequence.
For example, a first request may be a miss. While the first request is retrieved from a graphics memory, a second request that is a hit may be received. It is undesirable to have the second request delayed unnecessarily. This is particularly true when an L2 cache is used to service requests from more than one texture cache; different texture caches may have made the first and the second requests. In such a case, the texture cache making the second request has no reason to wait for the first request to be serviced.
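The cost of this head-of-line blocking can be illustrated with a small timing sketch. The latencies and the one-request-per-cycle issue model below are assumptions chosen only to make the effect visible, not figures from the source.

```python
# Hypothetical timing sketch comparing in-order and out-of-order
# servicing of L2 requests. Latencies are illustrative assumptions.
MISS_LATENCY = 300   # assumed round trip to graphics memory, in cycles
HIT_LATENCY = 2      # assumed time to service an L2 hit, in cycles

def completion_times(kinds, in_order):
    """kinds: list of "hit"/"miss", one request issued per cycle.
    Returns the cycle at which each request's data is returned."""
    done = []
    for i, kind in enumerate(kinds):
        raw = i + (MISS_LATENCY if kind == "miss" else HIT_LATENCY)
        if in_order and done:
            # Data must return in request order, so a fast hit cannot
            # complete before the slow miss issued ahead of it.
            raw = max(raw, done[-1])
        done.append(raw)
    return done

# First request misses; second request, from a different texture
# cache, hits.
print(completion_times(["miss", "hit"], in_order=True))   # [300, 300]
print(completion_times(["miss", "hit"], in_order=False))  # [300, 3]
```

Under the in-order constraint the hit is delayed to cycle 300 behind the miss; serviced out of order, the same hit returns at cycle 3, which is the behavior the invention seeks to enable.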
Thus, what is needed is an L2 cache that can service requests in an out-of-order fashion.