Certain data processors store data using error correcting codes (hereafter simply ECCs). A data processor using an ECC generates a symbol each time it stores data in an associated memory system. Each symbol contains a first and a second subset of bits. The first subset of bits forms the data byte, half-word, word, etc. that the data processor desires to store. The second subset of bits is generated by the data processor and is a predetermined function of the first subset of bits. When the data processor needs a particular byte, half-word, word, etc., it retrieves the symbol whose first subset of bits is the desired byte, half-word, word, etc. The data processor extracts the first subset of bits from the retrieved symbol, generates a second subset of bits using the same predetermined function, and combines the two subsets to form a new symbol. The data processor then compares the retrieved symbol and the new symbol. Any difference between the two symbols indicates that a data storage error occurred.
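The store-and-check procedure described above can be sketched as follows. The check function here (a simple XOR fold of the data word into a parity nibble) is a hypothetical stand-in for whatever predetermined function a given protocol actually defines; it detects but does not locate errors:

```python
def check_bits(data: int) -> int:
    """Hypothetical predetermined function: fold the data word
    into a 4-bit XOR parity nibble (illustration only)."""
    c = 0
    while data:
        c ^= data & 0xF
        data >>= 4
    return c

def make_symbol(data: int) -> tuple:
    # first subset: the data word; second subset: derived check bits
    return (data, check_bits(data))

def verify(symbol: tuple) -> bool:
    data, stored_check = symbol
    # regenerate the second subset with the same predetermined
    # function and compare it against the stored copy
    return stored_check == check_bits(data)

sym = make_symbol(0b1011_0001)
assert verify(sym)                        # symbol stored and retrieved intact
corrupted = (sym[0] ^ 0b0100, sym[1])     # one bit flipped in memory
assert not verify(corrupted)              # mismatch flags the storage error
```

In a real protocol the check bits are chosen so that the pattern of differences (the syndrome) also locates the failing bit, rather than merely signalling a mismatch.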
ECC protocols are characterized by the number of bit errors each is able to correct and the number of bit errors each is able to detect. For instance, a particular ECC protocol may be a single-bit-correcting, double-bit-detecting protocol. This protocol can detect and correct any single bit reversal within the symbol that occurs between the time of symbol storage and the time of symbol retrieval. This protocol can also detect if any two bits within the symbol flip logic states, though it cannot correct such an error. The symbol is discarded when it is known to contain an error but is not susceptible to correction. This exemplary protocol cannot reliably detect if three or more bits flip logic states. The size of the second subset of bits relative to the first subset of bits determines the number of detectable and correctable bits for each ECC protocol. The greater the number of bits within the second subset, the greater the range of errors that the protocol can detect and correct. Generally, a particular protocol is selected so that the likelihood of undetectable errors is sufficiently small while the increase in memory storage required for the second subset of bits remains manageably small.
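As a concrete instance of a single-bit-correcting, double-bit-detecting protocol, the following sketch implements an extended Hamming (8,4) code on a 4-bit data word. The (8,4) geometry and the function names are illustrative choices, not taken from the text above; practical memory systems apply the same construction to wider words (e.g., 8 check bits over 64 data bits):

```python
def hamming84_encode(nibble: int) -> list:
    """Extended Hamming (8,4): 4 data bits plus 3 positional
    check bits plus 1 overall-parity bit."""
    d = [(nibble >> i) & 1 for i in range(4)]       # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]     # positions 1..7
    p0 = 0
    for b in code:
        p0 ^= b                                     # overall parity bit
    return [p0] + code                              # index 0 holds p0

def hamming84_decode(word: list) -> tuple:
    """Return (status, nibble): 'ok', 'corrected', or 'uncorrectable'."""
    syndrome = 0
    for pos in range(1, 8):                         # XOR of set positions
        if word[pos]:
            syndrome ^= pos
    overall = 0
    for b in word:
        overall ^= b
    if syndrome and overall:                        # single error: locate and fix
        word = word[:]
        word[syndrome] ^= 1
        status = 'corrected'
    elif syndrome:                                  # even parity but nonzero
        return 'uncorrectable', None                # syndrome: double error
    else:                                           # data intact (a lone flip of
        status = 'ok'                               # p0 itself lands here too)
    nibble = word[3] | (word[5] << 1) | (word[6] << 2) | (word[7] << 3)
    return status, nibble

codeword = hamming84_encode(0b1011)
assert hamming84_decode(codeword) == ('ok', 0b1011)
bad = codeword[:]; bad[5] ^= 1                      # single flip: corrected
assert hamming84_decode(bad) == ('corrected', 0b1011)
bad2 = codeword[:]; bad2[2] ^= 1; bad2[6] ^= 1      # double flip: detect only
assert hamming84_decode(bad2) == ('uncorrectable', None)
```

The example also shows the storage overhead trade-off discussed above: four check bits per four data bits here, versus a far smaller relative overhead when the same scheme protects a wider first subset.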
Known data processing systems that use ECC protocols face a design compromise. These systems either (1) delay transmission of the data to the ultimate data user until the ECC protocol is performed or (2) immediately use the data before it is completely processed pursuant to the ECC protocol. In the first case, an extra cycle of delay is introduced into the data path. Oftentimes the data input/output path is already a critical speed path in a data processing system, and this extra delay only worsens it. Furthermore, data errors of the type targeted by ECC protocols are relatively rare events. The first case is therefore a slow solution optimized for the infrequent case. In the second case, the data processor assumes the data it receives is correct, as is normally the case. However, the data processor must then be designed with complex subsystems that can "undo" the actions caused by bad data once the error is detected. For instance, a reversed bit will change the meaning of a fetched instruction or of its operand, and the execution of that instruction will not produce the intended result. The data error may be compounded if the incorrect instruction is a branch instruction, or should have been a branch instruction and was incorrectly modified. In these instances, the data processing system will begin executing instructions along a second, incorrect instruction stream. Such a data processing system may proceed with the correct data only after it has restored its state to the state existing immediately before it received the bad data. The second case is therefore an expensive solution optimized for the frequent case.