In modern data processing and communication systems, it is often desirable to transfer data between separate and often disparate devices. A common problem arises in such systems when data transferred from a sending device (or "source") to a receiving device (or "receiver") is distorted in some way before reaching its destination. The distortion has various possible causes, including environmental noise (in the case of radio frequency systems), defective or noisy transmission lines, or a defective data channel, or "bus". The latter is a particularly common source of error in digital systems. Since there is no currently available means of ensuring 100% error-free data transfer, the system designer must endeavor to provide means for detecting errors when they occur and taking appropriate action.
Two well-known error detection methods are the checksum and CRC methods. Both involve computing a "signature" for each block of data words to be transmitted. As a block of data words (or "data set") is sent over the bus, means are provided at the source end of the bus for computing a quasi-unique word representing the data set. This quasi-unique word is called a "signature". When computed at the source end of the bus, it is called an "expected signature" ("SE"). A similar signature, referred to as an "error detection signature" ("S"), is computed at the receiver end of the bus and compared to the expected signature. When SE and S differ, it is assumed that there is an error in the received data and the data is retransmitted. Contrary to expectation, however, when SE and S are equal this does not necessarily indicate that the transmission was error-free. This is due to the fact that error detection probability generally decreases as the complexity of the error detection method decreases.
For example, in the prior art checksum method, groups of data words are summed without regard to overflow. As the following example demonstrates, when an overflow occurs, otherwise detectable errors may be lost.
Assume that a data set comprises three four-bit words "A", "B" and "C". Let A = "1000", B = "0100", and C = "0010". The checksum expected signature SE for the set A, B, C is:
______________________________________
       A    "1000"
     + B    "0100"
     + C    "0010"
     ---------------
      SE    "1110"
______________________________________
Assume now that due to problems with the data bus, the most significant bit ("MSB") of each word is "stuck" at "1". This will not affect A but will cause errors in the received values of B and C. Denoting the received data words A', B', C' respectively, the error detection signature S' is computed as follows:
______________________________________
      A'    "1000"
    + B'    "1100"
    + C'    "1010"
    ---------------
      S'    "1110"   (carry out of the MSB discarded)
______________________________________
Thus, despite a significant error in the received data, SE and S' are equal and no error will be detected.
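The masking effect just described can be sketched in a few lines of code. This is an illustrative model only; the function name `checksum4` and the 4-bit word size are taken from the worked example above, not from any particular implementation.

```python
def checksum4(words):
    """Sum 4-bit words, discarding overflow (keep only the low 4 bits)."""
    total = 0
    for w in words:
        total = (total + w) & 0xF  # overflow out of bit 3 is lost here
    return total

sent = [0b1000, 0b0100, 0b0010]        # A, B, C
received = [w | 0b1000 for w in sent]  # MSB stuck at "1": A', B', C'

se = checksum4(sent)      # expected signature SE = 0b1110
s = checksum4(received)   # error detection signature S' = 0b1110
# SE equals S', so the corruption of B and C goes undetected.
```

The discarded carry in the second sum is exactly what cancels the effect of the stuck bit, which is why the two signatures coincide despite the errors.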
In the prior art CRC ("cyclic redundancy code") method, known sequences of words are divided by a polynomial constant and the remainders thereof accumulated to form an expected signature. See, for example, Siewiorek and Swarz, "The Theory and Practice of Reliable System Design", pp. 101 et seq., Digital Press, 1982. The principal disadvantage of the CRC method is that the signatures it produces are dependent upon the order in which the data words are sent. The sequence ABCD produces a different signature than BACD. Thus, for example, this method would cause difficulties in a graphics system in which the sequence in which data is read from graphics circuitry to produce pixels on the display can vary.
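The order dependence of the CRC method can likewise be demonstrated with a sketch. The code below uses a generic bitwise CRC-8 with the polynomial x^8 + x^2 + x + 1 purely for illustration; the patent does not specify a particular polynomial, and the sample data values are hypothetical.

```python
def crc8(data, poly=0x07):
    """Minimal bitwise CRC-8 (polynomial x^8 + x^2 + x + 1), for illustration."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF  # divide out the polynomial
            else:
                crc = (crc << 1) & 0xFF
    return crc

abcd = crc8([0x0A, 0x0B, 0x0C, 0x0D])  # words sent in order A, B, C, D
bacd = crc8([0x0B, 0x0A, 0x0C, 0x0D])  # same words, first two swapped
# abcd != bacd: the CRC signature depends on word order.
```

Because each word is folded into the running remainder before the next word is processed, swapping even two words changes the intermediate state and hence the final signature, whereas the checksum above is order-independent.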
It is therefore desirable to provide an error detection signature apparatus and method that has higher error detection probabilities than the prior art and will detect errors independent of the order in which data words are sent, but is relatively uncomplicated and easy to implement. The present invention achieves these goals.