Techniques to detect and correct soft or hard errors in digital data bit strings introduced during electronic operations such as storage, transmission or processing are well known in the computer industry. It is now common practice to include additional bits, such as those characterized by the Hamming code, with data undergoing processing to ensure that all of the bits of a string of digital data correctly represent the information, or at a minimum to detect that the information contains errors.
The general practice of error detection and correction involves adding special bits, called checkbits, to the string representing the actual data, and then using the combined bit string to detect and correct errors through syndromes and error pointers. The ultimate goal is to detect the presence of errors in the bit string of the data and to identify the error locations for corrective action.
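As an illustration of checkbits and syndrome decoding, the following is a minimal sketch using the classic Hamming(7,4) single-error-correcting code; the bit layout and function names are illustrative only and do not come from the present description:

```python
# Minimal sketch: Hamming(7,4) checkbit generation and syndrome decoding.
# Three checkbits protect four data bits; a nonzero syndrome is an error
# pointer giving the 1-based position of a single-bit error.

def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def syndrome(cw):
    """Recompute the parity checks over the received codeword."""
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    return s1 + 2 * s2 + 4 * s3          # 0 = no error; else error position

cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                               # inject a single-bit error at position 5
pos = syndrome(cw)                       # syndrome points at position 5
cw[pos - 1] ^= 1                         # flip the indicated bit to correct it
assert syndrome(cw) == 0                 # corrected codeword checks clean
```

Double-bit error detection, as used throughout this discussion, would add one further overall-parity bit to this scheme.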
The use of error correction is particularly prevalent in digital data transmission systems and storage systems, where time or external physical phenomena are likely to cause spurious or soft errors in the data bits. Error correction is equally valuable in handling hard errors, such as may be caused by permanent device failures. A specific example, to which the embodiment of the present invention is directed, involves the storage of data in integrated circuit dynamic memory devices (DRAMs), which by virtue of their miniaturization are susceptible to a mix of errors.
A variety of different error correction code techniques are known and in use. Representative examples include applications involving a single long data word with single bit error correct and double bit error detect capability, two (or more) short data words with single bit error correct and double bit error detect capability, and a single long data word with multiple bit error correct capability. Translation from one form to another requires conversion of the associated checkbits.
An important aspect of error correction code use involves the efficient conversion of data stored or transmitted in error correction coded bit strings. For example, it is not unusual to combine two or four 32-bit long words into 64 or 128-bit strings, respectively, for purposes of storage or transmission over wide buses or radio frequency media, and at a later stage to partition the extended bit string back into the 32-bit words. A strong motivation for managing data in this format arises from the fact that error detection and correction can be accomplished more efficiently, in terms of checkbits per data bits, as the length of the data string increases.
For example, a single 64-bit string with single bit correction and double bit detection requires a total of 72 bits (64+8). On the other hand, two 32-bit strings with individual single bit correction and double bit detection require a total of 78 bits (32+7 and 32+7).
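The 72-versus-78 comparison above follows from standard Hamming-code arithmetic, sketched below; the function name is illustrative:

```python
# Checkbit arithmetic behind the 72-vs-78 comparison: a Hamming code needs
# the smallest r with 2**r >= m + r + 1 for single-error correction on m
# data bits, plus one overall-parity bit for double-error detection.

def secded_checkbits(m):
    """Checkbits for single-error-correct, double-error-detect on m data bits."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r + 1                       # +1 for the double-error-detect bit

assert secded_checkbits(64) == 8       # one 64-bit word: 64 + 8 = 72 bits
assert secded_checkbits(32) == 7       # each 32-bit word: 32 + 7 = 39; two words: 78 bits
```

The overhead per data bit shrinks as the word grows, which is the efficiency motivation noted above.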
With the recognition that integrated circuit memory devices are susceptible to soft errors, data is preferably stored in DRAM using groups of two or four 32-bit words, i.e., 64-bit or 128-bit combinations, together with the associated checkbits. For example, combining two 32-bit words into a single 78-bit combination having 14 checkbits allows four data bit errors to be corrected. In contrast, if each of the 32-bit strings were managed with seven checkbits of its own, the best that could be accomplished is one error correction and one error detection per 32-bit word. Error correction is a significantly more valuable capability than mere error detection. Thus, in the context of data storage where errors are likely, it is more efficient to combine words for purposes of storage. A similar preference applies to the transmission of data in longer bit strings over buses, where wide buses are preferred from a layout and data transfer rate perspective. On the other hand, when data is being moved on or off integrated circuit chips, pin limitations motivate the use of shorter bit strings of data.
Conventional systems for converting data between long and short bit strings utilize architectures which correct the errors in the first data bit string and then use the corrected data bit string to generate the checkbits for the data in converted form. Unfortunately, this sequence typically requires processing through 13 or so levels of integrated circuit device logic to accomplish both the correction and checkbit generation operations. Given the importance of efficient bit management, the accentuated need for error correction, and data transfer speed, a need exists for accomplishing this conversion using the fewest levels of logic possible.
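The sequential dependency of the conventional architecture can be sketched in miniature as follows, with a Hamming(7,4) code standing in for the long-word code and simple even parity standing in for the short-word checkbits; all names and the toy code sizes are illustrative assumptions, not the codes discussed above:

```python
# Toy sketch of the conventional correct-then-regenerate conversion:
# stage 2 cannot begin until stage 1 has produced corrected data, which
# is what stacks up the levels of logic in a hardware implementation.

def h74_syndrome(cw):
    """Syndrome of a 7-bit Hamming(7,4) codeword (0 = clean)."""
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    return s1 + 2 * s2 + 4 * s3

def convert(cw):
    # Stage 1: correct the incoming long codeword.
    pos = h74_syndrome(cw)
    if pos:
        cw[pos - 1] ^= 1
    data = [cw[2], cw[4], cw[5], cw[6]]      # extract the 4 data bits
    # Stage 2: partition into short words and regenerate their checkbits,
    # using the already-corrected data from stage 1.
    halves = [data[:2], data[2:]]
    return [(h, h[0] ^ h[1]) for h in halves]

noisy = [0, 1, 1, 0, 1, 1, 1]                # codeword for data 1,0,1,1 with bit 5 flipped
assert convert(noisy) == [([1, 0], 1), ([1, 1], 0)]
```

Collapsing these two stages, rather than executing them back to back, is the direction the stated need points toward.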