A typical static random access memory (SRAM) is broken down into several divisions and several banks. SRAMs are often used in tag memory applications to store the upper address bits of main memory data locations, indicating which pieces of data from the main memory are stored in the cache memory. When a particular piece of data is to be retrieved, the address of the data is compared to the addresses stored in the tag memory. If the data address matches an address stored in the tag memory, a "cache hit" occurs and the data is retrieved from the cache memory. Since the SRAM cache is generally faster than the main memory, overall system performance is enhanced. When there is a cache miss (i.e., an attempt to access a main memory location that is not in the cache), it is necessary to fetch the memory location from the main memory and place it in the cache so that the data at this address is readily accessible for the next access. This process is generally called a cache line fill. To accomplish a cache line fill, the upper address bits of the main memory location are written into the tag. In this case, the least recently used (LRU) bit in the tag determines where in the cache to write the data so that there is a minimum impact on system performance. Whenever there is a cache read hit, the LRU bit is updated.
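The lookup, line-fill and LRU behavior described above can be sketched as a small model. This is a minimal illustration only, assuming a two-way set-associative organization with one LRU bit per set; the class and method names (`TagMemory`, `lookup`, `line_fill`) are illustrative and do not come from the specification.

```python
class TagMemory:
    """Illustrative model of a tag memory holding upper address bits.

    Assumes two ways per set and a single LRU bit per set; this is a
    sketch of the behavior described in the text, not the disclosed circuit.
    """

    def __init__(self, num_sets):
        # Each set holds two tag entries (upper address bits) or None if empty.
        self.tags = [[None, None] for _ in range(num_sets)]
        # One LRU bit per set: the index of the least recently used way.
        self.lru = [0] * num_sets
        self.num_sets = num_sets

    def lookup(self, address):
        """Return (hit, way). On a cache read hit, the LRU bit is updated."""
        set_index = address % self.num_sets
        tag = address // self.num_sets          # upper address bits
        for way in (0, 1):
            if self.tags[set_index][way] == tag:
                self.lru[set_index] = 1 - way   # the other way is now LRU
                return True, way
        return False, None                      # cache miss

    def line_fill(self, address):
        """On a miss, write the tag into the way selected by the LRU bit."""
        set_index = address % self.num_sets
        tag = address // self.num_sets
        way = self.lru[set_index]               # LRU bit picks the victim way
        self.tags[set_index][way] = tag
        self.lru[set_index] = 1 - way
        return way
```

For example, a first `lookup` of an address misses, a `line_fill` writes its upper address bits into the LRU way, and a second `lookup` of the same address then hits and updates the LRU bit.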
The two methods of updating data stored in the cache memory during a WRITE operation are generally referred to as "write through" and "write back". In a write through, the information is written to the main memory at the same time it is written to the cache. However, a write through generally takes longer to accomplish than writing to the cache alone, since the main memory is generally slower. In a write back, the data is written only to the cache, and the main memory is updated as needed. The tag must therefore keep track of whether a particular address location in the cache is a dirty location, indicating that it has been changed and is no longer the same as the corresponding main memory location.
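The contrast between the two policies can be illustrated with a short sketch. The `Cache` class and its fields (`lines`, `dirty`, `flush`) are assumptions made for illustration; the specification does not name them.

```python
class Cache:
    """Illustrative sketch of write-through vs. write-back update policies."""

    def __init__(self, main_memory, write_back=False):
        self.main = main_memory          # dict: address -> data (main memory)
        self.lines = {}                  # address -> data held in the cache
        self.dirty = set()               # dirty locations not yet written back
        self.write_back = write_back

    def write(self, address, data):
        self.lines[address] = data
        if self.write_back:
            # Write back: mark the location dirty; main memory updated later.
            self.dirty.add(address)
        else:
            # Write through: update main memory at the same time as the cache.
            self.main[address] = data

    def flush(self, address):
        """Write a dirty location back to main memory when needed."""
        if address in self.dirty:
            self.main[address] = self.lines[address]
            self.dirty.discard(address)
```

In the write-through case, main memory is always current; in the write-back case, the dirty marking is what lets the system defer the slower main-memory update until it is actually needed.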
Referring to FIG. 1, a circuit 10 is shown illustrating a write control system. The circuit 10 generally comprises a set of write buffers 12a and 12b, a pair of global write control blocks 14a and 14b, a number of tag groups 16a, 16b, 16c and 16d, a number of tag groups 18a, 18b, 18c and 18d and a number of decode and local write control logic blocks 20a-20h. The tag groups 16a-16d and 18a-18d are subdivisions of the tag RAM address space. The write control block 14a is shown presenting a signal to the tag groups 18a-18d, while the write control block 14b is shown presenting a signal to the tag groups 16a-16d. The tag groups 16a-16d generally represent a bank "0" and the tag groups 18a-18d generally represent a bank "1", where each bank is a functional subdivision of the tag bits (e.g., LRU, dirty and tag address data). A separate write control block 14 is generally required for each bank (i.e., bank "0" and bank "1"). The write control block 14a generally receives a write enable signal WE1, while the write control block 14b generally receives a write enable signal WE2. If each of the tag groups 16a-16d and 18a-18d is a single functioning location, only two write control blocks 14a and 14b may be required. However, if any of the particular tag groups 16a-16d or 18a-18d are implemented as multi-use bits, additional write control blocks 14 will be required. The larger the number of different multi-use bits present, the greater the number of write control blocks 14 that will be required. In a system where five global write controls would be required, the overhead of replicating all of the control lines from the write control blocks 14a-14b would be excessive.
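The scaling concern above can be made concrete with a rough count. The function below is an assumption made purely for illustration: it supposes one global write control block per bank plus one per additional multi-use interpretation of the tag bits, with each block replicating some fixed number of control lines to every tag group it drives. Neither the function nor its parameters appear in the specification.

```python
def write_control_overhead(num_banks, multi_use_variants, groups_per_bank,
                           lines_per_control=4):
    """Illustrative count of global write control blocks and replicated lines.

    Assumes (hypothetically) one control block per bank plus one per extra
    multi-use variant, each fanning lines_per_control control lines out to
    every tag group in its bank.
    """
    num_controls = num_banks + multi_use_variants
    replicated_lines = num_controls * groups_per_bank * lines_per_control
    return num_controls, replicated_lines

# Two single-function banks of four tag groups, as in FIG. 1:
baseline = write_control_overhead(2, 0, 4)
# Three additional multi-use variants push the design to five global
# write controls, multiplying the replicated control lines:
expanded = write_control_overhead(2, 3, 4)
```

Under these assumed numbers, going from two to five global write controls multiplies the replicated control lines accordingly, which is the overhead the text characterizes as excessive.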