1. Field of the Invention
The invention relates to cache memory systems in computer systems, and more particularly to an apparatus which flushes the cache memory when certain memory sensitive operations occur.
2. Description of the Related Art
The computer industry is a vibrant and growing field that continues to evolve as new innovations occur. The driving force behind this innovation has been the increasing demand for faster and more powerful computers. A major bottleneck in computer speed has historically been the speed with which data can be accessed from memory, referred to as the memory access time. The microprocessor, with its relatively fast processor cycle times, has generally had to wait during memory accesses to account for the relatively slow memory access times. Therefore, improvement in memory access times has been one of the major areas of research in enhancing computer performance.
In order to bridge the gap between fast processor cycle times and slow memory access times, cache memory was developed. A cache is a small amount of very fast, and expensive, zero wait state memory that is used to store a copy of frequently accessed code and data from system memory. The microprocessor can operate out of this very fast memory and thereby reduce the number of wait states that must be interposed during memory accesses.
The management or control of the cache is generally performed by a device referred to as a cache controller. The cache controller is principally responsible for keeping track of the contents of the cache as well as controlling data movement into and out of the cache. Another responsibility of the cache controller is the preservation of cache coherency, which refers to the requirement that the copy of system memory held in the cache be identical to the data held in system memory. In addition, the cache controller is responsible for determining which memory addresses are capable of residing in the cache, referred to as cacheable addresses. Certain segments of addressable memory may not be allowed to reside in the cache due to cache coherency or other considerations. For example, memory that is read only or write protected is sometimes designated as non-cacheable to prevent these locations from being modified in the cache. The cache controller is therefore responsible for preventing data from non-cacheable addresses from being placed in the cache.
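The cacheability determination described above can be sketched as follows. This is an illustrative model only, not part of the original disclosure; the address range chosen as non-cacheable is a hypothetical example.

```python
# Illustrative sketch: a cache controller deciding whether a memory
# address is allowed to reside in the cache.  The range below is a
# hypothetical example of a region designated non-cacheable.

NONCACHEABLE_RANGES = [
    (0xA0000, 0xBFFFF),   # hypothetical non-cacheable region
]

def is_cacheable(address):
    """Return True if the address may be placed in the cache."""
    for low, high in NONCACHEABLE_RANGES:
        if low <= address <= high:
            return False
    return True
```

A controller modeled this way simply refuses to allocate a cache line for any address falling in a designated non-cacheable range, while all other addresses remain eligible for caching.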
There are generally two types of cache memory systems: write-through and write-back. In a write-through cache design, all writes that hit in the cache update the cache and are always broadcast to system memory. In a write-back cache design, writes are performed only to the cache, and the cache provides the modified information to the system only when another party requests the address. Thus, when a write hit occurs in a write-back cache, the cache location is updated with the new data, but the write operation is not broadcast to system memory. In this instance, the cache holds a modified copy of the data and assumes the responsibility of providing this modified copy to other requesting devices. When the cache holds a modified copy of data, the cache copy is referred to as dirty data, and the respective location in system memory is said to hold stale or invalid data. Therefore, in a write-back cache, the cache controller is required to snoop the system bus when it does not have control of the system bus to determine if other devices request memory locations where the cache holds a modified copy of the data. If so, the cache controller must write back the modified data to system memory so that the requesting device can receive the correct copy of the data. Also, when a cache flush occurs, a write-back cache must write back all modified locations to system memory.
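The write-back behavior described above can be sketched as follows. This is an illustrative model only, not the patented apparatus; the class and method names are hypothetical.

```python
# Minimal write-back cache model (illustrative): writes update only the
# cache and mark the line dirty; a snoop hit on a dirty line forces a
# write-back to system memory so the requesting device sees correct data.

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory          # dict: address -> data (system memory)
        self.lines = {}               # address -> (data, dirty flag)

    def write(self, address, data):
        # Write hit: update the cache only; do not broadcast to memory.
        self.lines[address] = (data, True)

    def snoop(self, address):
        # Another bus master requests this address: write back if dirty.
        if address in self.lines:
            data, dirty = self.lines[address]
            if dirty:
                self.memory[address] = data
                self.lines[address] = (data, False)

memory = {0x100: 0xAA}
cache = WriteBackCache(memory)
cache.write(0x100, 0xBB)
stale = memory[0x100]                 # memory is stale: write went to cache only
cache.snoop(0x100)
current = memory[0x100]               # snoop forced the dirty data back to memory
```

After the write, system memory still holds the stale value; only after the snoop-triggered write-back do the cache and memory agree, which is precisely the coherency responsibility assigned to the write-back cache controller above.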
Background on memory relocation schemes in computer systems is deemed appropriate. A common method of maximizing system efficiency is to copy the basic input/output system (BIOS) from read only memory (ROM) into dynamic random access memory (DRAM). Because the RAM is often 32 bits wide while the ROM is only 16 bits wide, and because the memory access time for RAM is much shorter than the access time for ROM, the memory access time is greatly reduced. One common method for accomplishing ROM relocation is to copy the ROM data to RAM located in high memory. The memory map is then altered to enable the high memory RAM to be addressed where the ROM was previously addressed. For example, in computer systems developed by Compaq Computer Corporation, the system ROM-BIOS is originally located at memory addresses E0000h to FFFFFh. After power-up of the computer system, the code in the system ROM is copied to memory addresses FE0000h to FFFFFFh. The physical memory block starting at memory address FE0000h may then be remapped to memory addresses E0000h to FFFFFh, where the ROM was originally addressed.
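The address translation involved in this remapping can be sketched as follows, using the example addresses given above. This is an illustrative sketch only; the function name is hypothetical.

```python
# Sketch of the ROM relocation example above: after the BIOS code is
# copied to high RAM starting at FE0000h, accesses to the original ROM
# range E0000h-FFFFFh are redirected to the shadow copy in high memory.

ROM_BASE    = 0xE0000     # original ROM location
ROM_TOP     = 0xFFFFF
SHADOW_BASE = 0xFE0000    # high RAM holding the BIOS copy

def remap(address):
    """Translate a CPU address into the remapped physical address."""
    if ROM_BASE <= address <= ROM_TOP:
        return SHADOW_BASE + (address - ROM_BASE)
    return address
```

The 128 kbyte ROM range thus maps onto the RAM block FE0000h to FFFFFFh, while all other addresses pass through unchanged.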
When the BIOS is stored in ROM, the BIOS cannot be changed by software. However, after the BIOS is copied into RAM, the BIOS can be changed by writing new data to the RAM. Therefore, it is a common practice to designate the portions of the RAM in which the BIOS is stored as write-protected to prevent software from modifying these locations. The respective areas of RAM are write-protected by setting a bit referred to as a write-protect bit in a status register associated with the memory. When the write-protect bit is set, write operations to that area do not alter the stored data, and the write data is simply discarded. Nonetheless, some software programs or users are able to alter the contents of the BIOS stored in write-protected RAM. In order to do so, a user simply clears the write-protect bit, changes the RAM where the BIOS is located, and then sets the write-protect bit again after the changes have been completed.
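The write-protect sequence described above can be sketched as follows. This is an illustrative model only; the class name and addresses are hypothetical.

```python
# Illustrative model of the write-protect behavior described above: when
# the write-protect bit is set, write data to the protected RAM area is
# discarded; clearing the bit allows the BIOS copy to be altered.

class ShadowRam:
    def __init__(self):
        self.data = {}
        self.write_protect = True     # write-protect bit set after BIOS copy

    def write(self, address, value):
        if self.write_protect:
            return                    # write data is discarded
        self.data[address] = value

ram = ShadowRam()
ram.data[0xE0000] = 0x55              # hypothetical BIOS byte copied at power-up
ram.write(0xE0000, 0x77)
protected = ram.data[0xE0000]         # unchanged: the write was discarded

ram.write_protect = False             # user clears the write-protect bit,
ram.write(0xE0000, 0x77)              # modifies the BIOS copy,
ram.write_protect = True              # and sets the bit again
modified = ram.data[0xE0000]          # the BIOS copy now holds the new value
```
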
In order to further enhance system performance, portions of the BIOS may be cached so that frequently used portions of the BIOS are immediately available to the microprocessor through the cache. However, if the cache controller does not comprehend write-protection, problems may develop. If the user were to write to a write-protected area, the cache controller would update the cached copy, but the data would never actually be stored at the write-protected main memory location. This is true of both write-through and write-back caches, and it results in a cache incoherency problem. Because the cache controller did not comprehend write-protection, the previous way to resolve this problem was to designate the write-protected memory areas as noncacheable. But this produced a large performance drop because the area, especially the BIOS area, was not being cached.
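The incoherency described above can be sketched as follows, here for a write-through cache that does not comprehend write-protection. This is an illustrative model only; the class names are hypothetical.

```python
# Sketch of the incoherency described above: the write updates the
# cached copy, but write-protected system memory discards it, so the
# cache and memory hold different values for the same address.

class ProtectedMemory:
    def __init__(self, data):
        self.data = dict(data)

    def write(self, address, value):
        pass                           # write-protected: write data discarded

class NaiveWriteThroughCache:
    """A cache controller with no knowledge of write-protection."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def read(self, address):
        if address not in self.lines:
            self.lines[address] = self.memory.data[address]  # cache on read
        return self.lines[address]

    def write(self, address, value):
        self.lines[address] = value          # cache is updated ...
        self.memory.write(address, value)    # ... but memory discards the write

mem = ProtectedMemory({0xE0000: 0x55})
cache = NaiveWriteThroughCache(mem)
before = cache.read(0xE0000)
cache.write(0xE0000, 0x77)
cached_value = cache.read(0xE0000)    # cache holds the new data
memory_value = mem.data[0xE0000]      # memory still holds the old data
```

After the write, the cache and system memory disagree, which is exactly the incoherency that previously forced write-protected areas to be marked noncacheable.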
The i486 microprocessor from Intel Corporation (Intel) had an internal cache with such problems. The C5 or 82495 cache controller from Intel, which was designed as a secondary cache controller for the i486, was aware of write-protected areas and indeed cached the write-protect status for each location. However, to resolve the cache coherency problem mentioned above, whenever a write-protected area was accessed, the C5 cache controller would cache the data in the external cache on reads, but would indicate to the i486 that the location was noncacheable. While this was a slight improvement, in that accesses need only be made to the secondary or external cache, the internal cache on the i486 still was not caching the location, and a performance degradation remained. It is therefore desirable to be able to utilize a first level cache system which does not comprehend write-protected areas with a second level cache system which does comprehend write-protection, such that all read operations are cached in the first level cache as well as the second level cache, but incoherencies do not develop on write operations.