1. Field of the Disclosure
The disclosure relates to a method for the recovery of lost data and for the correction of corrupted data which are transmitted from a transmitter device to a receiver device. First, coding of said data is performed by means of an encoder connected to the transmitter device. Subsequently, said data is transmitted from the transmitter device to the receiver device via a transmission system, and said data is decoded, preferably through application of a Low Density Parity Check method, by means of a decoder connected to the receiver device, wherein lost data is restored and corrupted data is corrected during decoding.
2. Discussion of the Background Art
The transmitted data can be audio or video streams, for instance. From a transmitter device which makes these data available, the data is transmitted e.g. to a mobile receiver device. The mobile receiver device can be, for instance, a mobile phone, a PDA or another mobile end device. Alternatively, data can also be transmitted from a transmitter device to a stationary receiver device.
Examples of standards used for the transmission of data to mobile end devices include DVB-H, MBMS and, expected in the near future, DVB-SH. The proposed concept also works in point-to-point communications.
In order to guarantee a good transmission quality, it is necessary to verify the correct transmission of data or data packets to the receiver device. Various methods exist for the recovery of lost data and for the correction of corrupted data which were not correctly transmitted to the receiver device.
A known method for the recovery of lost data and the correction of corrupted data is the Low Density Parity Check (LDPC) method, or Low Density Parity Check Code. This method is applied on a so-called erasure channel. Apart from its application in coding on the level of the physical layer, further applications exist in the field of the Packet Erasure Channel (PEC).
FIG. 1 schematically illustrates an example of the recovery of lost data and the correction of corrupted data according to the state of the art. FIG. 1 depicts a case where it is desired to transmit a number k of information packets from a transmitter device (left-hand side) to a receiver device (right-hand side). Using a packet-level encoder on the transmitter side, the k information packets and the m parity packets are assembled into n=m+k codeword packets. On the level of the physical layer, the packets are secured by an error correction code (e.g. a turbo code) and an error detection code (e.g. a Cyclic Redundancy Check, CRC) so that corrupted packets can be removed. On the levels above the physical layer, packets are either correctly received or are considered lost in that they are erased because the CRC has detected a corrupted packet in the physical layer. Thus, from the layers above, the transmission channel is seen as a so-called erasure channel, the packets representing the transmission units. In practice, however, it happens that the CRC fails, leading to errors in packets that are marked as correct. These CRC failures cause a degradation in performance.
On the receiver side, the received codeword packets are decoded by the packet-level decoder so that the lost packets are recovered and the corrupted data is corrected.
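The packet erasure behaviour described above can be sketched with a minimal, purely illustrative model. Here zlib.crc32 merely stands in for the physical-layer error detection code, the transmit helper and its loss_prob/corrupt_prob parameters are hypothetical, and the rare case of an undetected error (a CRC failure) is not modelled:

```python
import random
import zlib

random.seed(1)

def transmit(packet: bytes, loss_prob: float, corrupt_prob: float):
    """Illustrative packet erasure channel: packets whose CRC check fails
    are erased, so upper layers see only correct packets or erasures."""
    if random.random() < loss_prob:
        return None                      # packet lost on the channel
    crc = zlib.crc32(packet)             # CRC attached at the transmitter
    if random.random() < corrupt_prob:   # bit error on the physical layer
        packet = bytes([packet[0] ^ 0x01]) + packet[1:]
    if zlib.crc32(packet) != crc:        # CRC detects corruption -> erase
        return None
    return packet

received = [transmit(b"payload-%d" % i, 0.2, 0.2) for i in range(10)]
erasures = sum(p is None for p in received)
```

In this idealized model every surviving packet is correct, which is exactly the erasure-channel view of the upper layers; the performance degradation discussed above arises precisely when this idealization fails.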
The recovery of lost data and the correction of corrupted data can be realized by introducing redundancy into the data. The encoding process handled by the packet-level encoder is usually performed in a bit-wise (or byte-wise) manner using an encoder with a generic binary linear block code. The decoding is subsequently performed by solving the equation system which is defined by the parity-check matrix H of the code.
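As an illustration of such bit-wise (here byte-wise) packet-level encoding, the following sketch builds m parity packets as XOR combinations of the k information packets. The systematic generator structure and all dimensions are illustrative assumptions, not the specific code of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

k, m, packet_len = 4, 2, 8           # info packets, parity packets, bytes each
n = k + m

# Hypothetical parity part of a systematic generator matrix G = [I_k | P]
# over GF(2): column j selects which info packets feed parity packet j.
P = rng.integers(0, 2, size=(k, m), dtype=np.uint8)

info = rng.integers(0, 256, size=(k, packet_len), dtype=np.uint8)

# Byte-wise encoding: each parity packet is the XOR of the selected
# information packets.
parity = np.zeros((m, packet_len), dtype=np.uint8)
for j in range(m):
    for i in range(k):
        if P[i, j]:
            parity[j] ^= info[i]

codeword = np.vstack([info, parity])  # the n = k + m codeword packets
```

Because the code is systematic, the first k codeword packets are the information packets themselves; the decoder later recovers erased packets by solving the corresponding XOR equations.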
In order to recover lost packets, maximum likelihood (ML) decoding can be used to recover the erasures. The problem that each decoder has to face after receiving the symbols from the erasure channel can be described by solving the following equation:

H_K̄·x_K̄ = H_K·x_K.  (1)
Here, x_K̄ (x_K) denotes the set of erased (correctly received) symbols and H_K̄ (H_K) the sub-matrix composed of the corresponding columns of the parity-check matrix. A proper way to solve linear equations is to apply Gaussian elimination. For large block lengths, however, this would require a large number of operations (e.g. many additions for the specific case of GF(2)) and hence would be too time-consuming. Therefore, smart Gaussian elimination techniques were proposed in DE 10 2009 017 540 that take advantage of the sparseness of the parity-check matrix. The goal is to apply the time-consuming brute-force Gaussian elimination only on a small sub-matrix Pl. To this end, H_K̄ is put into a lower triangular form just by row and column permutations. However, this step might be blocked by some columns; the corresponding symbols are named reference symbols or pivots. Next, these columns are moved from H_K̄ to a separate matrix P, and the triangularization process can continue. As a final outcome, all erased symbols can be represented as a combination of reference symbols, and hence it is sufficient to apply Gaussian elimination only on parts of P. If the number of pivots is low, P is small and Gaussian elimination is quick. Gaussian elimination has a complexity of O(n³), where n here stands for the block length. If, for example, the block length of the code is doubled, a decoding speed can be expected that is eight times lower (at least for large block lengths).
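Solving equation (1) by plain (brute-force) Gaussian elimination over GF(2) can be sketched as follows. The toy parity-check matrix, the codeword and the helper name gf2_solve are illustrative assumptions:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A·x = b over GF(2) by Gaussian elimination.
    A is assumed to have full column rank; otherwise decoding fails."""
    A = (A % 2).astype(np.uint8)
    b = (b % 2).astype(np.uint8)
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        # find a pivot row for column c
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            raise ValueError("rank deficient: erasures cannot be recovered")
        A[[r, pivot]] = A[[pivot, r]]
        b[[r, pivot]] = b[[pivot, r]]
        # eliminate column c from all other rows (XOR = addition in GF(2))
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        r += 1
    return b[:cols]

# Toy parity-check matrix and a valid codeword (H·x = 0 over GF(2))
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]], dtype=np.uint8)
x = np.array([1, 1, 1, 0, 0], dtype=np.uint8)

erased = [1, 3]                           # positions of the erased symbols
known = [0, 2, 4]
s = H[:, known] @ x[known] % 2            # right-hand side H_K·x_K of (1)
x_recovered = gf2_solve(H[:, erased], s)  # solves H_K̄·x_K̄ = H_K·x_K
```

For real block lengths this dense elimination is exactly the cubic-cost step the smart variant avoids, by restricting it to the small pivot system.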
The smart Gaussian elimination process is now described in more detail. The matrix H_K̄ can be divided into different sub-matrices as illustrated in FIG. 2. The main parts are: A, which represents the part already in triangular form; B, which is the part that still has to be put into triangular form; D, which is a sparse matrix at the beginning but will be zeroed out at the end; Z, which contains only zeroes; and P, which is made up of the columns that correspond to the reference symbols. In the end, P can be divided into a sparse upper part and a dense lower part. Now the following algorithm can be applied:

1. Search for any degree one row in B. The degree of a row (column) is defined by the number of one entries it contains. At the start, B is equal to H_K̄. In case no degree one row can be found, continue with step 4.
2. Apply only column and row permutations to move the single one entry of the degree one row to the top left of B.
3. Increase i and k by one. This way, A becomes bigger in each step while the dimension of B shrinks. At the beginning, A does not exist.
4. In case no degree one row could be found (cf. step 1), move a defined column of B and Z to P. Thus, j is also decreased by one. At the very beginning, P does not exist. Which column of B and Z is chosen to be moved to P is defined by the pivoting strategy.
5. Repeat steps 1 to 4 until dim(B)=0×l, where l is an arbitrary positive integer number.
6. Zero out D. Now the lower part of P, named Pl, becomes dense.
7. Perform brute-force Gaussian elimination on Pl. Brute-force Gaussian elimination consists of two steps (for the notation cf. FIG. 3):
   a. A forward elimination step to put the matrix Pl into upper triangular form: We consider the set of equations Pl·x′_K̄ = s*. Here, x′_K̄ are only the unknown pivots, thus a subset of the original unknowns x_K̄. The vector s* is the corresponding known term, which is made up of a multiplication of the corresponding part of the parity-check matrix with the known code symbols, H*_K·x′_K. Here, the known code symbols x′_K are a permutation of those in equation (1). Then we perform (weighted) row additions to put Pl into upper triangular form. After the forward elimination step, we obtain a modified set of equations H′_K̄·x′_K̄ = s′. Here, the matrix H′_K̄ corresponds to Pl in upper triangular form, and s′ is the known term, also called the syndrome; it is made up of a multiplication of the known code symbols with the corresponding part of the parity-check matrix, H′_K·x′_K.
   b. A back substitution step, to obtain the pivots x′_K̄.
8. If the Gaussian elimination was successful, perform another back substitution to recover the remaining unknowns. Otherwise, decoding has failed.
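The triangularization of steps 1 to 5 can be sketched as follows. This is a simplified illustration that only counts the reference symbols (pivots), i.e. the dimension of the dense system Pl left for brute-force Gaussian elimination; the helper name count_pivots and the highest-degree pivoting strategy are assumptions, since the pivoting strategy is deliberately left open above:

```python
import numpy as np

def count_pivots(H):
    """Greedy triangularization sketch: repeatedly peel rows of residual
    degree one (steps 1-3); when none exists, declare one residual column
    a pivot / reference symbol and remove it (step 4). Returns the number
    of pivots, i.e. the size of the dense sub-matrix Pl."""
    H = (H.copy() % 2).astype(np.uint8)
    rows = set(range(H.shape[0]))
    cols = set(range(H.shape[1]))
    pivots = 0
    while cols:
        # step 1: look for a degree one row within the residual matrix B
        deg1 = next((r for r in rows
                     if sum(H[r, c] for c in cols) == 1), None)
        if deg1 is not None:
            # steps 2-3: its single one entry extends the triangular part A
            c = next(c for c in cols if H[deg1, c])
            rows.discard(deg1)
            cols.discard(c)
        else:
            # step 4: blocked -- move one column to P (assumed pivoting
            # strategy: the residual column of highest degree)
            c = max(cols, key=lambda col: sum(H[r, col] for r in rows))
            cols.discard(c)
            pivots += 1
    return pivots
```

For a sparse parity-check matrix the pivot count stays small, so the cubic-cost elimination runs only on a matrix of that size rather than on the full block length.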
In case of undetected errors in the codeword symbols, the whole codeword will be corrupted, because the erroneous codeword symbol (or packet) will be involved in several additions. This leads to an error floor. This is illustrated in FIG. 6a, uppermost curve, BEEC (no error correction).
A drawback of the described smart Gaussian elimination process is that it can neither detect nor correct errors in the codeword, leading to a loss in performance.
It is an object of the present disclosure to provide a method for the recovery of lost data and the correction of corrupted data which makes it possible to detect multiple errors and to correct single codeword errors.