1. Field of Invention
This invention relates generally to digital data communication systems and, more particularly, to the arithmetic of finite fields. Still more particularly, the invention relates to the decoding of error correcting codes.
2. Related Art and Other Considerations
In a digital data communication system (including storage and retrieval from optical or magnetic media) it is necessary to employ an error control system in order to increase the transfer rate of information and at the same time make the error rate arbitrarily low. For fixed signal-to-noise ratios and fixed bandwidths, improvements can be made through the use of error-correcting codes.
With error-correction coding the data to be transmitted or stored is mathematically processed to obtain additional data symbols called check symbols or redundancy symbols. After transmission or retrieval, the received data and check symbols are mathematically processed to obtain information about locations and values of errors. Since it is most efficient to treat the data symbols as members of a finite field rather than the familiar fields of real or rational numbers, finite field arithmetic is used.
One problem arising in finite field arithmetic is the complexity of performing division as compared to the complexity of performing addition, subtraction or multiplication. Division can be done by taking the multiplicative inverse of an element followed by a multiplication. A simple method of inversion would therefore result in a simple method for division.
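Purely as an illustration of the inverse-then-multiply view of division (and not as a description of any particular circuit), the idea can be sketched in software for the small field GF(2^4). The reduction polynomial x^4 + x + 1 and all function names here are assumptions of this sketch:

```python
# Sketch: division in GF(2^4) as multiplicative inverse followed by multiply.
# The reduction polynomial x^4 + x + 1 (0b10011) is an illustrative choice.

M = 4
POLY = 0b10011

def gf_mul(a, b):
    """Carry-less multiply of two M-bit field elements, reduced mod POLY."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a          # add (XOR) a shifted copy for each set bit of b
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY       # reduce when degree reaches M
    return r

def gf_inv(a):
    """Brute-force multiplicative inverse: search for x with a*x = 1."""
    return next(x for x in range(1, 1 << M) if gf_mul(a, x) == 1)

def gf_div(b, a):
    """Division b/a carried out as b * a^(-1)."""
    return gf_mul(b, gf_inv(a))
```

Since gf_div(gf_mul(a, b), a) recovers b for any nonzero a, the identity b/a = b·a^(-1) holds throughout the field; a cheaper inversion therefore directly yields a cheaper division.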
There are several methods of performing inversion in GF(2^m). The most common is the use of a look-up table, equivalent to a 2^m by m ROM. This method is the fastest but requires the most circuit gates. In many applications it is desirable to process m-bit symbols serially to reduce circuit size. In such applications it would be desirable to perform additions, multiplications and inversions of m-bit symbols in m clock cycles. Several such methods are described below.
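A software analogue of the look-up-table method can be sketched as follows: the 2^m by m ROM is modeled by a list precomputed once, after which each inversion is a single read. The field GF(2^4) and the polynomial x^4 + x + 1 are illustrative choices, not part of any prior-art design:

```python
# Sketch of the ROM (look-up table) inversion method, modeled in software.
# A hardware 2^m-by-m ROM is represented by a precomputed Python list.

M = 4
POLY = 0b10011  # x^4 + x + 1, an illustrative reduction polynomial

def gf_mul(a, b):
    """Carry-less multiply of two M-bit field elements, reduced mod POLY."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

# Build the table: INV[a] holds a^(-1); index 0 is unused (0 has no inverse).
INV = [0] * (1 << M)
for a in range(1, 1 << M):
    for x in range(1, 1 << M):
        if gf_mul(a, x) == 1:
            INV[a] = x
            break
```

The trade-off described in the text is visible here: lookup is a single step, but the table grows as m·2^m bits, which for large m dominates the circuit (or memory) cost.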
FIG. 1 shows a circuit which employs the following method: if a is a nonzero element of GF(2^m), then a^(-1) = a^(2^m - 2) = a^2 · a^4 · a^8 . . . a^(2^(m-1)).
The inverse can be obtained by repeated squaring and multiplying. However, the squaring and multiplication functions must be performed in one clock cycle, which causes excessive circuit size. In FIG. 1, R1 is initialized to "a" and clocked once to obtain a^2, at which point R2 is initialized to 1. After m-1 more clocks, R2 contains a^(-1).
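The register behaviour just described can be mirrored in software: r1 plays the role of R1, holding successive squares of a, and r2 plays the role of R2, accumulating their product over m-1 steps. A sketch for GF(2^4), with the illustrative polynomial x^4 + x + 1:

```python
# Sketch of the repeated squaring-and-multiplying inversion of FIG. 1,
# modeled for GF(2^4). Field polynomial choice is illustrative.

M = 4
POLY = 0b10011

def gf_mul(a, b):
    """Carry-less multiply of two M-bit field elements, reduced mod POLY."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_inv(a):
    """a^(-1) = a^2 * a^4 * ... * a^(2^(M-1)) for nonzero a."""
    r1 = gf_mul(a, a)   # R1 clocked once: holds a^2
    r2 = 1              # R2 initialized to 1 at this point
    for _ in range(M - 1):
        r2 = gf_mul(r2, r1)   # accumulate the product in R2
        r1 = gf_mul(r1, r1)   # advance R1 to the next square
    return r2
```

Note that each of the M-1 iterations performs both a multiply and a square, which is why the corresponding hardware must fit both functions into a single clock cycle.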
Another method uses the Euclidean algorithm and is described in Berlekamp's "Algebraic Coding Theory," published by Aegean Park Press, revised 1984 edition, pp. 36-44. It uses 3m+4 flip-flops and requires 2m+2 clock cycles.
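The algorithm underlying this method (though not Berlekamp's specific 3m+4 flip-flop circuit) can be sketched as an extended Euclidean computation on polynomials over GF(2), represented here as integer bit patterns; the field GF(2^4) and polynomial x^4 + x + 1 are again illustrative assumptions:

```python
# Sketch: inversion in GF(2^4) via the extended Euclidean algorithm
# on GF(2)[x], with polynomials packed into integers.

M = 4
POLY = 0b10011  # x^4 + x + 1, illustrative reduction polynomial

def gf_mul(a, b):
    """Carry-less multiply of two M-bit field elements, reduced mod POLY."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def deg(p):
    """Degree of a GF(2) polynomial stored as an integer (-1 for zero)."""
    return p.bit_length() - 1

def gf_inv(a):
    """Inverse of nonzero a mod POLY; invariant: s0*a = r0, s1*a = r1 (mod POLY)."""
    r0, r1 = POLY, a
    s0, s1 = 0, 1
    while r1 != 1:
        shift = deg(r0) - deg(r1)
        if shift < 0:               # keep deg(r0) >= deg(r1) by swapping
            r0, r1, s0, s1 = r1, r0, s1, s0
            continue
        r0 ^= r1 << shift           # one step of polynomial division over GF(2)
        s0 ^= s1 << shift           # mirror the step on the Bezout coefficient
    return s1
```

Because POLY is irreducible, gcd(a, POLY) = 1 for every nonzero a, so the remainder sequence always reaches 1 and s1·a = 1 (mod POLY) on exit.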
Yet another method, shown in FIG. 2, is described in Whiting's Ph.D. dissertation for the California Institute of Technology entitled "Bit-Serial Reed-Solomon Decoders in VLSI," 1984, p. 58. In the Whiting technique, R is initialized to the element "a" expressed in its dual basis representation (see p. 39 of Whiting) and f(x) is a logic function which produces bit 0 of the dual basis representation of a^(-1). Repeated multiplication of a by α^(-1) allows f(x) to produce the remaining bits of a^(-1). The function f(x) can be implemented by a 2^m by 1 ROM. The inverse is produced serially in m clocks.
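A behavioural sketch of this bit-serial idea (a model, not Whiting's actual circuit) follows for GF(2^4). It relies on the standard identity that dual-basis coordinate i of an element x equals the trace Tr(x·α^i), so the ROM f can be modeled as "trace of the inverse": multiplying the register by α^(-1) each clock then shifts the next coordinate of a^(-1) into position 0. The field polynomial, the brute-force model of f, and all names are assumptions of this sketch:

```python
# Behavioural model of a bit-serial dual-basis inverter for GF(2^4).
# f is modeled as Tr(y^(-1)); in hardware it would be a 2^m-by-1 ROM.

M = 4
POLY = 0b10011      # x^4 + x + 1, illustrative reduction polynomial
ALPHA = 0b0010      # the primitive element alpha = x
ALPHA_INV = 0b1001  # alpha^(-1), since alpha * (alpha^3 + 1) = 1 here

def gf_mul(a, b):
    """Carry-less multiply of two M-bit field elements, reduced mod POLY."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def trace(x):
    """Tr(x) = x + x^2 + x^4 + x^8; always 0 or 1 in GF(2^4)."""
    t, s = 0, x
    for _ in range(M):
        t ^= s
        s = gf_mul(s, s)
    return t

def gf_inv(a):
    """Brute-force inverse, standing in for precomputed ROM contents."""
    return next(x for x in range(1, 1 << M) if gf_mul(a, x) == 1)

def f(y):
    """Model of f(x): bit 0 of the dual-basis representation of y^(-1)."""
    return trace(gf_inv(y))

def serial_inverse_bits(a):
    """Emit the M dual-basis coordinates of a^(-1), one per 'clock'."""
    bits, r = [], a
    for _ in range(M):
        bits.append(f(r))          # f extracts the current coordinate
        r = gf_mul(r, ALPHA_INV)   # shift: R <- R * alpha^(-1)
    return bits
```

The shift works because (a·α^(-i))^(-1) = a^(-1)·α^i, so applying f to the register after i multiplications by α^(-1) yields Tr(a^(-1)·α^i), which is coordinate i of a^(-1) in the dual basis.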
Each of these techniques, however, requires either excessive circuitry or excessive clock cycles to produce an inverse. Thus there is a need for an approach that reduces both the size and the time requirements for finite field inversion.