1. Field of the Invention
This invention is related to the field of computer systems and, more particularly, to graphics features of computer systems.
2. Description of the Related Art
Superscalar microprocessors achieve high performance by executing multiple instructions per clock cycle and by choosing the shortest possible clock cycle consistent with the design. On the other hand, superpipelined microprocessor designs divide instruction execution into a large number of subtasks which can be performed quickly, and assign pipeline stages to each subtask. By overlapping the execution of many instructions within the pipeline, superpipelined microprocessors attempt to achieve high performance.
Superscalar microprocessors demand low memory latency due to the number of instructions attempting concurrent execution and due to the increasing clock frequency (i.e. shortening clock cycle) employed by the superscalar microprocessors. Many of the instructions include memory operations to fetch (read) and update (write) memory operands. The memory operands must be fetched from or conveyed to memory, and each instruction must originally be fetched from memory as well. Similarly, superpipelined microprocessors demand low memory latency because of the high clock frequency employed by these microprocessors and the attempt to begin execution of a new instruction each clock cycle. It is noted that a given microprocessor design may employ both superscalar and superpipelined techniques in an attempt to achieve the highest possible performance characteristics.
Microprocessors are often configured into computer systems which have a relatively large, relatively slow main memory. Typically, multiple dynamic random access memory (DRAM) modules comprise the main memory system. The large main memory provides storage for a large number of instructions and/or a large amount of data for use by the microprocessor, providing faster access to the instructions and/or data than may be achieved from a disk storage, for example. However, the access times of modern DRAMs are significantly longer than the clock cycle length of modern microprocessors. The memory access time for each set of bytes being transferred to the microprocessor is therefore long. Accordingly, the main memory system is not a low latency system. Microprocessor performance may suffer due to high memory latency.
In order to allow low latency memory access (thereby increasing the instruction execution efficiency and ultimately microprocessor performance), computer systems typically employ one or more caches to store the most recently accessed data and instructions. Additionally, the microprocessor may employ caches internally. A relatively small number of clock cycles may be required to access data stored in a cache, as opposed to a relatively larger number of clock cycles required to access the main memory.
Low memory latency may be achieved in a computer system if the cache hit rates of the caches employed therein are high. An access is a hit in a cache if the requested data is present within the cache when the access is attempted. On the other hand, an access is a miss in a cache if the requested data is absent from the cache when the access is attempted. Cache hits are provided to the microprocessor in a small number of clock cycles, allowing subsequent accesses to occur more quickly as well and thereby decreasing the effective memory latency. Cache misses require the access to receive data from the main memory, thereby increasing the effective memory latency.
The microprocessor is typically configured to operate upon a variety of data types, many of which are cached to reduce the memory latency of access to those data types. Unfortunately, certain data types are typically operated upon in a manner which may increase average effective memory latency. For example, graphics data and operation thereon by the microprocessor may deleteriously affect memory latency in several ways. Generally, a relatively large amount of graphics data (in comparison to the amount of available cache storage) is operated upon in a repetitive fashion by the microprocessor, and then transferred to a graphics controller for display upon a computer monitor. Like other data operated upon by the microprocessor, the graphics data may be transferred into the data cache within the microprocessor as it is being operated upon. The graphics data thereby displaces other data stored in the cache. Additionally, the data is repeatedly transmitted between the microprocessor and the graphics controller (or memory, since the graphics image typically may be substantially larger than the cache, and hence operation on one portion of the image displaces another portion of the image within the cache) as various changes to the display are performed. A large amount of bandwidth is therefore consumed for the transfer of graphics data. If the consumption of bandwidth in this fashion interferes with the transfer of other types of data, memory latency may be lengthened by the lack of available bandwidth for transfer of that other data. Accordingly, it is desirable to reduce the number of times the graphics data is transferred into and out of the microprocessor.