Most communication channels suffer from noise, interference, or distortion caused by hardware imperfections, physical limitations, or constraints on transmitted power, among other factors. As a result, imperfect transmission can produce erroneous or missing data at the receiving end, which in turn degrades the quality of communications. One common way to address the problem of imperfect communication channels is to introduce error-control coding.
The goal of error-control coding is to encode information in such a way that even if the channel (or possibly a storage medium) introduces errors, the receiver can correct them and recover the original transmitted information. The basic idea behind error-correcting codes is to add a certain amount of redundancy to the message prior to its transmission through the noisy (or otherwise imperfect) channel. This redundancy is extra information added in a known manner. The encoded message may be corrupted by noise as it passes through the channel, but at the receiver the original message can be recovered from the corrupted one, provided the number of errors stays within the limit for which the coding strategy was designed. Thus, by adding redundancy intelligently, the effect of random noise can be diluted to some extent.
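The simplest illustration of this idea is a repetition code: each message bit is transmitted several times, and the receiver takes a majority vote. The sketch below (illustrative only; the repetition rate `r` and the error positions are chosen for demonstration) shows how this added redundancy lets the receiver correct one flipped bit per group:

```python
def encode(bits, r=3):
    """Add redundancy by repeating each message bit r times."""
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    """Majority vote over each group of r received bits."""
    return [1 if sum(received[i:i + r]) > r // 2 else 0
            for i in range(0, len(received), r)]

message = [1, 0, 1, 1]
codeword = encode(message)      # [1,1,1, 0,0,0, 1,1,1, 1,1,1]

corrupted = codeword[:]
corrupted[0] ^= 1               # channel flips one bit in the first group
corrupted[5] ^= 1               # and one bit in the second group

assert decode(corrupted) == message  # single errors per group are corrected
```

With `r = 3`, the code corrects any single error within a group of three repeated bits, but at the cost of tripling the transmission; practical codes achieve the same protection with far less redundancy.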
Error-control coding is a discipline under the branch of applied mathematics called Information Theory, developed by Claude Shannon in 1948 (C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal, vol. 27, pp. 379-423, 1948). Prior to Shannon's work, conventional wisdom held that channel noise prevented error-free communications. Shannon proved otherwise, showing that channel noise limits the transmission rate, not the error probability. Specifically, Shannon showed that every communications channel has a capacity, C (measured in bits per second), and as long as the transmission rate, R (also in bits per second), is less than C, it is possible to design a virtually error-free communications system using error-control codes. Shannon's contribution was to prove that such codes exist, not to show how to construct them.
Most error-control techniques address random errors and work at the bit level. The basic mechanism takes the following form. A block encoder takes a block of k bits and replaces it with an n-bit codeword (n > k). For a binary code, there are 2^k possible codewords in a codebook. After the channel introduces errors, the received word can be any one of 2^n n-bit words, of which only 2^k are valid codewords. The job of the decoder is to find the codeword that is closest to the received n-bit word.
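A minimal sketch of this decoding rule, using a hypothetical (5, 2) block code chosen for illustration (its four codewords have a minimum Hamming distance of 3, so any single bit error is corrected). The codebook maps each of the 2^2 = 4 messages to one of the 2^5 = 32 possible 5-bit words, and the decoder picks the codeword nearest the received word:

```python
# Hypothetical (5, 2) binary block code: k = 2 message bits map to an
# n = 5 bit codeword, so only 2**2 = 4 of the 2**5 = 32 words are valid.
CODEBOOK = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (0, 1, 0, 1, 1),
    (1, 0): (1, 0, 1, 0, 1),
    (1, 1): (1, 1, 1, 1, 0),
}

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the message whose codeword is closest to the received word."""
    return min(CODEBOOK, key=lambda msg: hamming(CODEBOOK[msg], received))

# Codeword for (1, 0) is (1, 0, 1, 0, 1); the channel flips bit 2:
received = (1, 0, 0, 0, 1)
assert decode(received) == (1, 0)  # nearest-codeword decoding corrects it
```

This brute-force search over the codebook is only feasible for tiny codes; practical codes exploit algebraic structure so the decoder never has to enumerate all 2^k codewords.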