There are communication systems that rely on the synchronization of transmitter and receiver clocks to correctly transfer data payloads. One class of such systems involves transmitters and receivers separated by large distances, such as satellite communications with a ground station. The transmitter and receiver clocks may occasionally drift relative to each other, causing single bits to be inserted into, or deleted from, the received data relative to the data payload the transmitter tried to send.
FIG. 1A shows an example of a prior art message 2 including a data payload 4 of N bits and a checksum 6 of J bits as in a Cyclic Redundancy Check (CRC). Such prior art error correction schemes assume that the received checksum is uncorrupted in almost all bit positions.
If the bits of the message 2 are treated alike, then insertion or deletion of a bit is equally probable at any bit location, and the received data payload and checksum may be corrupted in many if not almost all bits. FIGS. 1B and 1C show the central problem with most prior art error correction-detection schemes: when an early bit is deleted from such a message as in FIG. 1B, or inserted as in FIG. 1C, these schemes see all the bits that follow as wrong, making error correction impractical.
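The shifting effect described above can be illustrated with a short sketch. The payload pattern, the use of CRC-32 as the checksum, and the deleted bit position below are arbitrary illustrative choices, not part of the disclosure:

```python
# Illustrative sketch: a single early bit deletion shifts every later
# bit by one position, so the received stream disagrees with the sent
# stream in a large fraction of bit positions.
import zlib

def bits_to_bytes(bits):
    """Pack a bit string like '0110...' into bytes, zero-padded to a byte boundary."""
    padded = bits.ljust(-(-len(bits) // 8) * 8, "0")
    return bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

payload = "0110100111010010" * 4                       # 64-bit example payload
checksum = format(zlib.crc32(bits_to_bytes(payload)), "032b")
sent = payload + checksum                              # message as in FIG. 1A

# The channel deletes the 3rd bit (as in FIG. 1B); every subsequent
# bit now lands one position early at the receiver.
received = sent[:2] + sent[3:]

shifted = sum(a != b for a, b in zip(sent, received))
print(f"{shifted} of {len(received)} compared bit positions differ")
```

Even though only one bit was deleted, a conventional position-by-position comparison (and any checksum computed over the shifted stream) sees widespread corruption.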
While there have been sporadic research results for error correction in the presence of bit insertions and deletions, systematic error-correcting codes of more than 8 bits have proved computationally difficult, according to a survey article by N. J. A. Sloane, "On Single-Deletion-Correcting Codes", last updated 2002. As mentioned in that article, error correction for a channel inserting single bits is very similar to what was surveyed for deleting single bits. A message protocol is needed that can minimize the damage from bit insertion or bit deletion noise for messages much longer than the 8-bit limitation discussed in Sloane's article.
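Among the codes surveyed in Sloane's article are the Varshamov-Tenengolts (VT) codes, which correct a single deletion in a short block. The following sketch, with an arbitrarily chosen 8-bit codeword, illustrates the principle; it is background only, not the protocol of the present disclosure:

```python
# Sketch of single-deletion correction with a Varshamov-Tenengolts code.
# VT_a(n) is the set of n-bit words x with sum(i * x_i) == a (mod n + 1);
# any single deletion from a codeword can be corrected uniquely.
def vt_syndrome(bits, n):
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1)

def vt_decode(received, n, a=0):
    """Recover the unique VT_a(n) codeword from an (n-1)-bit word
    produced by a single deletion, by trying every reinsertion."""
    candidates = {
        tuple(received[:i] + [bit] + received[i:])
        for i in range(n) for bit in (0, 1)
    }
    matches = [c for c in candidates if vt_syndrome(c, n) == a]
    assert len(matches) == 1   # deletion balls of VT codewords are disjoint
    return list(matches[0])

codeword = [1, 0, 0, 0, 0, 0, 0, 1]      # lies in VT_0(8): 1 + 8 = 9 = 0 mod 9
n = len(codeword)
corrupted = codeword[:3] + codeword[4:]  # channel deletes the 4th bit
print(vt_decode(corrupted, n) == codeword)  # → True
```

As the survey notes, this approach has only been made practical for short blocks, which motivates the protocol described herein for much longer messages.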