1. Field of the Invention
The present invention relates to the technical field of electrical engineering and, more particularly, to a device and a method for determining the position of a bit error in a bit sequence.
2. Description of the Related Art
Errors occurring when transmitting data or storing data should be corrected as far as possible. Often errors can be assumed to occur rather infrequently. In this case, it is useful to implement a coding algorithm that is able to correct a maximum number of single bit errors in a specific data unit.
The problem of single bit error correctability is usually solved by implementing a suitable coding algorithm. Mostly, linear codes are used for this purpose, as they can be implemented in hardware in a particularly convenient manner.
In a systematic code, k information bits a1a2 . . . ak are complemented by n−k check bits ak+1ak+2 . . . an to generate a code word c=a1a2 . . . an of the length n. This yields
c = a1a2a3 . . . ak ak+1ak+2 . . . an,

wherein a1a2a3 . . . ak are the information bits and ak+1ak+2 . . . an are the check bits.
The set C of all code words is a subspace of F2n, wherein F2={0,1} denotes the binary field. If C⊂F2n is a linear subspace of F2n with the dimension k, this is referred to as a (binary systematic) linear (n,k) code. Here, only binary codes will be discussed. Particular attention will be paid to codes in which n and k are arbitrary. For illustrative reasons, an exemplary linear (n,k) code with n=7 and k=4 is used.
A linear (n,k) code can be described unambiguously by its parity check matrix H. The parity check matrix H is a binary (n−k)×n matrix of the rank n−k. It has the form H=(A, In−k), wherein In−k denotes the (n−k)×(n−k) unit matrix. The unit matrix has ones in the main diagonal and otherwise zeroes. The row vector cεF2n is a code word when, and only when, equation (1)

HcT=0  (1)

applies. Here cT represents the transpose of c. If c is a row vector, then cT is a column vector.
In the following, the common nomenclature is adopted. If v=(v0, v1, . . . , vn−1) is a row vector, then the transpose vT of v represents the corresponding column vector
       ( v0   )
vT  =  ( v1   )
       (  ⋮   )
       ( vn−1 )
In this context, vεF2n and vTεF2n. Furthermore, if A is an m×n matrix, then the transpose of A, indicated by AT, is the n×m matrix whose jth column is the transpose of the jth row of A for 1≦j≦m. For example, if
    A = ( 1 1 0 )                ( 1 0 )
        ( 0 1 1 ),   then   AT = ( 1 1 )
                                 ( 0 1 )
Furthermore, this description uses the sign “+” as an indication of an XOR operation. This means that 0+0=0, 0+1=1, 1+0=1 and 1+1=0.
As an example, a linear (7,4) code is used which is characterized by its parity check matrix in equation (2)
        ( 1 0 1 1 1 0 0 )
    H = ( 1 1 0 1 0 1 0 )        (2)
        ( 1 1 1 0 0 0 1 )
It is to be noted here that the first four columns of H are formed by the matrix A, whereas the last three columns are formed by the unit matrix I3. c=(1,1,0,0,1,0,0) can be shown to be a code word, because HcT=(0,0,0)T yields the zero vector.
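The check of equation (1) can be illustrated in code. The following is a minimal Python sketch (the language and the function name syndrome are chosen here purely for illustration and are not part of the original disclosure); it evaluates HcT over F2 for the example (7,4) code and confirms that c=(1,1,0,0,1,0,0) is a code word:

```python
# Minimal sketch: evaluating H * c^T over F2 for the example (7,4) code
# of equation (2). The "+" of the text is the XOR operation, realized
# here by summing modulo 2.

H = [
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1],
]

def syndrome(H, y):
    """Return S(y) = H * y^T, each entry computed modulo 2."""
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

c = [1, 1, 0, 0, 1, 0, 0]
print(syndrome(H, c))  # [0, 0, 0] -> c satisfies equation (1)
```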
The coding of a linear code can be performed as follows. If H=(A, In−k) is the parity check matrix of a binary linear (n,k) code, then the k×n matrix

G=(Ik, AT)

is referred to as the canonical generator matrix of the code. The coding of a message a=a1a2 . . . ak into the corresponding code word c=a1a2 . . . akak+1 . . . an is realized via the matrix multiplication

aG=c

Equivalently:
(a1, a2, . . . , ak) AT = (ak+1, . . . , an),

wherein a1a2 . . . ak are the information bits and ak+1 . . . an are the check bits.
Based on the continued example, this will be explained in more detail. For the parity check matrix H from equation (2), the corresponding 4×7 matrix
        ( 1 0 0 0 1 1 1 )
    G = ( 0 1 0 0 0 1 1 )
        ( 0 0 1 0 1 0 1 )
        ( 0 0 0 1 1 1 0 )

may be identified as the corresponding generator matrix.
The message (a1, a2, a3, a4)εF24 is coded into the code word c by

c = (a1, a2, a3, a4) G = (a1, a2, a3, a4, a1+a3+a4, a1+a2+a4, a1+a2+a3)
Equivalently, given the information bits a1a2a3a4, the corresponding check bits may be calculated according to
                   ( 1 1 1 )
(a1, a2, a3, a4)   ( 0 1 1 )   =   (a1+a3+a4, a1+a2+a4, a1+a2+a3),
                   ( 1 0 1 )
                   ( 1 1 0 )

wherein (a1, a2, a3, a4) are the information bits and the vector on the right-hand side contains the check bits.
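The check bit calculation above may be sketched in Python (used here purely for illustration; the function name encode is an assumption, not part of the original):

```python
# Minimal sketch: computing the check bits a * A^T for the example
# (7,4) code and appending them to the information bits. All sums are
# taken modulo 2 (XOR).

A_T = [  # A^T, i.e. the transpose of the left 3x4 block A of H
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def encode(info):
    """Return the code word: k information bits plus n-k check bits."""
    checks = [sum(info[i] * A_T[i][j] for i in range(len(info))) % 2
              for j in range(len(A_T[0]))]
    return info + checks

print(encode([1, 1, 1, 0]))  # [1, 1, 1, 0, 0, 0, 1]
```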
It is to be noted here that the parity check matrix H from equation (2) has the same number of ones in each of its three rows. This feature is desirable for an efficient hardware implementation of the coding scheme, because the calculation of each of the n−k check bits then requires the same number of XOR operations (i.e. has the same logical depth). Another desirable property is that H is "sparsely occupied". A binary matrix is called "sparsely occupied" when it contains relatively few ones. Furthermore, the decoding can be described as follows, wherein x and y represent two binary vectors. The Hamming distance d(x, y) between x and y is the number of coordinates in which x and y differ. The Hamming weight w(x) of x is the number of coordinates of x that are not zero.
Obviously therefore w(x)=d(x,0) and d(x,y)=w(x−y). For example, if x=(0,1,0,0,0,1), then w(x)=2.
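As a small illustrative sketch (Python is used for illustration only; the function names are assumptions), Hamming distance and Hamming weight, including the identity w(x)=d(x,0), may be expressed as follows:

```python
# Minimal sketch: Hamming distance d(x, y) and Hamming weight w(x) for
# binary vectors, using the identity w(x) = d(x, 0) from the text.

def hamming_distance(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def hamming_weight(x):
    """Number of non-zero coordinates of x, i.e. d(x, 0)."""
    return hamming_distance(x, [0] * len(x))

x = [0, 1, 0, 0, 0, 1]
print(hamming_weight(x))  # 2, as in the example above
```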
Definition. Let C be a code. The number
d = min { d(u, v) : u, vεC, u ≠ v }

is called the minimum distance of C.
Lemma 1. The minimum distance of a linear code C is the smallest Hamming weight among all non-zero code words. This results in

d = min { w(c) : cεC, c ≠ 0 }
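Lemma 1 also suggests a direct, brute-force way to determine d for a small code: encode every non-zero message and take the smallest Hamming weight among the resulting code words. The following Python sketch does this for the example (7,4) code (names and the approach are illustrative and not part of the original disclosure):

```python
# Minimal sketch: minimum distance via Lemma 1, i.e. the smallest
# Hamming weight over all non-zero code words of the example (7,4) code.
from itertools import product

G = [  # canonical generator matrix G = (I4, A^T) of the example code
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def encode(a):
    """Code word c = a * G, entries computed modulo 2."""
    return [sum(ai * gi for ai, gi in zip(a, col)) % 2 for col in zip(*G)]

# enumerate all 2^4 - 1 non-zero messages and take the minimum weight
d = min(sum(encode(list(a))) for a in product([0, 1], repeat=4) if any(a))
print("minimum distance:", d)
```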
Theorem 1. If H is the parity check matrix of a linear code, then the code has the minimum distance d when, and only when, any d−1 columns of H are linearly independent and some d columns are linearly dependent. In other words, the minimum distance d equals the smallest number of columns of H summing up to 0.
At this point, the above example is continued: consider the parity check matrix H from equation (2). Any two columns of H are linearly independent, whereas some three columns are linearly dependent (for example, the first, second and fifth column sum up to 0). In this way, the associated linear code has a minimum distance of d=3, which is consistent with the code word c=(1,1,0,0,1,0,0) identified above, whose Hamming weight is 3.
Theorem 2. A linear code with even minimum distance d can simultaneously correct (d−2)/2 errors and detect d/2 errors.
Assume that a message aεF2k is coded into a code word cεF2n, which is then transmitted via a disturbed channel (or is stored in a memory). The vector yεF2n is received. If, during transmission (or storage), at most └(d−1)/2┘ errors occur, then the correct code word c may be reconstructed from y. At this point, the so-called syndrome of y becomes helpful.
Definition. Let H be the parity check matrix of a linear (n,k) code C. Then the column vector S(y)=HyT of the length n−k is called syndrome of yεF2n.
By the definition of the parity check matrix H (cf. equation (1)), yεF2n is a code word when, and only when, S(y) is the zero vector.
Theorem 3. For a binary code, the syndrome equals the sum of the columns of H in which the errors have occurred.
S(y) is called syndrome because it indicates the symptoms of the errors.
In particular, single error correction will be discussed. In this case, the above theorem assumes a simple form:
Theorem 4. A single bit error occurs when, and only when, the syndrome equals a column of H. The position of this column corresponds to the error position.
This is again demonstrated by means of the continued example: again consider the linear (7,4) code defined by the parity check matrix
        ( 1 0 1 1 1 0 0 )
    H = ( 1 1 0 1 0 1 0 )
        ( 1 1 1 0 0 0 1 )
Assume that the vector y=(1,0,1,0,0,0,1) is received. The syndrome is calculated as follows:
               ( 0 )
S(y) = HyT  =  ( 1 )
               ( 1 )
The syndrome S(y) matches the second column vector of H and thus indicates that the second coordinate of y is defective. This allows identifying the correct code word c=(1,1,1,0,0,0,1) and the information bits as 1110.
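The decoding procedure of Theorem 4, i.e. comparing the syndrome against the columns of H, can be sketched as follows (an illustrative Python sketch; the function names are assumptions and not part of the original):

```python
# Minimal sketch: single bit error correction per Theorem 4. The
# syndrome of the received vector y is compared against the columns of
# H; a matching column gives the position of the defective bit.

H = [
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1],
]

def syndrome(y):
    """Return S(y) = H * y^T over F2."""
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

def correct_single_error(y):
    """Flip the bit whose column of H equals the syndrome, if any."""
    s = syndrome(y)
    if s == [0, 0, 0]:
        return y  # y is already a code word
    for j, col in enumerate(zip(*H)):  # compare s against each column
        if list(col) == s:
            return y[:j] + [1 - y[j]] + y[j + 1:]
    raise ValueError("syndrome matches no column: not a single bit error")

y = [1, 0, 1, 0, 0, 0, 1]
print(correct_single_error(y))  # [1, 1, 1, 0, 0, 0, 1]
```

The loop over the columns is exactly the column-by-column comparison step of the conventional approach.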
The most direct and also most hardware-saving implementation of a linear code is realized by means of the so-called check matrix (parity check matrix) of the code. The matrix entries are bits. For a single bit error correction, the matrix can be structured so that each column of the matrix contains, for example, exactly three ones and otherwise zeroes. The only exceptions are the last columns of the check matrix, which preferably form a unit matrix and therefore each only need to contain a single one.
However, the conventional approaches to single bit error correction have the disadvantage that the syndrome has to be compared against the columns of the check matrix in order to determine which column matches the syndrome, and thereby the position of the bit error that occurred.