1. Field of the Invention
This invention is related to processors and, more particularly, to cache error handling in multithreaded processors.
2. Description of the Related Art
Presently, typical processors are single-threaded. That is, the instructions that are being executed concurrently in the processor all belong to the same thread. Instruction fetching in such processors generally involves fetching instructions from the single thread. In various implementations, branch prediction schemes may be used to control fetching or sequential fetching may be implemented. In either case, fetching may be redirected (e.g. if a branch misprediction occurs, for a taken branch in the sequential fetch implementation, or for an exception, trap, etc.).
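The sequential-fetch-with-redirect behavior described above can be illustrated with a brief sketch. The function name, the redirect table, and the 4-byte instruction size are illustrative assumptions, not details from the text.

```python
# Hypothetical sketch of sequential instruction fetch with redirection:
# the fetch address advances sequentially until a redirect (taken branch,
# misprediction recovery, exception, trap, etc.) supplies a new target.
# The 4-byte instruction size and all names are illustrative assumptions.

INSTR_SIZE = 4  # assumed fixed instruction width in bytes

def fetch_addresses(start, redirects, count):
    """Yield `count` fetch addresses starting at `start`.

    `redirects` maps a fetch address to its redirect target
    (e.g. a taken-branch target or a trap vector).
    """
    addr = start
    for _ in range(count):
        yield addr
        # A redirect overrides the sequential next-address computation.
        addr = redirects.get(addr, addr + INSTR_SIZE)
```

For example, with a taken branch at address 8 targeting address 100, the fetch stream is 0, 4, 8, 100, 104.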
Most present processors implement an instruction cache to store instructions for rapid fetching by the processor and a data cache to store data from memory that may be used during instruction execution (e.g. as operands of instructions). The cache may be implemented using on-chip or off-chip random access memory (RAM). Such memory is susceptible to soft errors that may be caused by alpha particle collisions, noise in the system, power supply variations, etc. Additionally, hard errors due to a failure in the memory may occur. Typically, a cache allocates and deallocates storage in contiguous blocks referred to as cache lines. That is, a cache line is the minimum unit of allocation/deallocation of storage space in the cache.
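The cache-line allocation granularity described above determines how a byte address maps onto the cache. A minimal sketch follows, assuming an illustrative 64-byte line and a 32 KiB direct-mapped cache; these parameters and all names are hypothetical, not taken from the text.

```python
# Hypothetical sketch: mapping a byte address onto cache lines.
# The 64-byte line size, 32 KiB direct-mapped capacity, and all
# names are illustrative assumptions.

LINE_SIZE = 64          # bytes per cache line (minimum allocation unit)
CACHE_SIZE = 32 * 1024  # total cache capacity in bytes
NUM_LINES = CACHE_SIZE // LINE_SIZE

def decompose(addr):
    """Split a byte address into (tag, line index, byte offset).

    The offset selects a byte within a line; the index selects the
    line's storage location; the tag disambiguates which memory line
    currently occupies that location.
    """
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    return tag, index, offset
```

Because the line is the minimum allocation unit, all bytes sharing the same tag and index are allocated and deallocated together.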
In some cases, caches may implement some form of error detection scheme to protect against errors in the stored data. Typically, the caches may store detection data (e.g. a parity bit, error correction code (ECC) bits, etc.) that may be used in conjunction with the stored data to detect at least some errors.
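The parity-based detection scheme mentioned above can be sketched as follows. One even-parity bit is stored alongside each data word, and an error is flagged when the recomputed parity disagrees on read. The function names are illustrative assumptions; a single parity bit detects any odd number of bit errors but cannot correct them.

```python
# Hypothetical sketch of single-bit even parity as detection data:
# a parity bit is stored with each word and rechecked on read.
# All names are illustrative assumptions.

def parity_bit(word):
    """Even-parity bit over a data word: XOR of all of its bits."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

def store(word):
    """Store a word together with its detection data (the parity bit)."""
    return (word, parity_bit(word))

def check(stored):
    """Recompute parity on read; a mismatch signals a bit error."""
    word, p = stored
    return parity_bit(word) == p
```

ECC schemes extend this idea with multiple check bits so that some errors (commonly single-bit errors) can be corrected rather than merely detected.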
More recently, multithreaded processors have been proposed. Particularly, in fine grain multithreading, the processor may have two or more threads concurrently in process. Instructions may be issued from any of the threads for execution. Thus, in some cases, instructions from different threads may be in adjacent pipeline stages in the processor. Since multiple threads are being fetched and executed, the handling of errors in cache accesses may be more complex.
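The fine-grain interleaving described above, in which adjacent pipeline stages may hold instructions from different threads, can be sketched with a simple issue loop. The round-robin selection policy and all names are illustrative assumptions; real processors may use other thread-selection policies.

```python
# Hypothetical sketch of fine-grain multithreaded issue: one instruction
# is selected per cycle, rotating over the ready threads, so consecutive
# pipeline stages can hold instructions from different threads.
# The round-robin policy and all names are illustrative assumptions.

from itertools import cycle

def interleave_issue(threads, cycles):
    """Issue one instruction per cycle, round-robin over thread ids.

    `threads` maps a thread id to its ordered list of pending
    instructions; returns the (thread id, instruction) issue sequence.
    """
    order = cycle(sorted(threads))
    issued = []
    for _ in range(cycles):
        tid = next(order)
        issued.append((tid, threads[tid].pop(0)))
    return issued
```

In the issue sequence produced by this sketch, back-to-back instructions come from different threads, which is what makes cache error handling more complex: an error detected on one thread's access must be attributed and handled without disturbing the other threads in flight.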