Flash memory has become increasingly popular in recent years. Flash memory is used in numerous applications, including mobile phones, digital cameras, and music players. A major emerging application is the use of flash memory in Solid State Drives (SSDs). Such drives may be implemented using high-density Multi-Level Cell (MLC) memories protected by redundancy in the form of advanced Error Correction Coding (ECC) schemes, for example iterative coding schemes based on Low-Density Parity-Check (LDPC) or Turbo codes.
Error correction codes are commonly used in memories in order to provide data reliability and integrity, by dealing with errors that are introduced by the physical medium during programming, reading, or storage. An error correction code is a set of codewords that satisfy a given set of constraints. One commonly used class of error correction codes is the class of binary linear block codes, in which the code is defined through a set of parity-check constraints on the codeword bits. In other words, a binary linear block code is defined by a set of linear equations over the two-element Galois field GF(2) that a valid codeword must satisfy. The set of linear equations can be conveniently described via a parity-check matrix H of M rows, such that each row of the matrix defines one parity-check constraint and a word C constitutes a valid codeword if and only if H·C=0 (over GF(2)). The vector S=H·C is commonly known as the syndrome vector associated with the word C. This syndrome may be referred to as the “error correction” syndrome to distinguish it from a different syndrome, the “cyclic redundancy check (CRC)” or “checksum” syndrome. Each element of the syndrome vector is associated with one of the parity-check equations; the value of the element is 0 for an equation that is satisfied by C and 1 for an equation that is not satisfied by C. The elements of the syndrome vector are also called “bits” of the syndrome vector herein. The syndrome weight (Ws) is the number of unsatisfied equations represented by the syndrome vector S. Thus, a word is a valid codeword if and only if its syndrome vector is all zeros, i.e., its syndrome weight is 0.
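The syndrome computation described above can be sketched in a few lines of Python. The parity-check matrix H, the valid codeword, and the corrupted word below are small hypothetical examples chosen for illustration; they are not taken from the text.

```python
def syndrome(H, c):
    """Syndrome vector S = H·c over GF(2): one bit per parity check."""
    return [sum(hij * cj for hij, cj in zip(row, c)) % 2 for row in H]

def syndrome_weight(s):
    """Ws: the number of unsatisfied parity-check equations."""
    return sum(s)

# Hypothetical parity-check matrix: M = 3 checks on a 6-bit word.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

valid = [1, 0, 1, 1, 1, 0]       # satisfies every check: H·C = 0
corrupted = [0, 0, 1, 1, 1, 0]   # the same word with bit 0 flipped
```

Here syndrome(H, valid) is the all-zero vector, while the single bit error in corrupted leaves two equations unsatisfied (syndrome weight 2), since bit 0 participates in the first and third parity checks.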
Error correction codes may be based on iterative coding schemes, such as LDPC and Turbo codes. In iterative coding schemes, decoding is performed using an iterative algorithm that iteratively updates its estimates of the codeword bits until the algorithm converges to a valid codeword. The iteratively updated estimates can be either “hard” estimates (1 vs. 0) or “soft” estimates, which are composed of an estimate of the bit's value (1 or 0), together with some reliability measure of the estimate indicating the probability that the estimated value is correct. A commonly used soft estimate is the Log Likelihood Ratio (LLR), the logarithm of the ratio of the probability of the bit being 0 to the probability of the bit being 1. A positive LLR means that the bit is estimated to be more likely to be 0 than 1. A negative LLR means that the bit is estimated to be more likely to be 1 than 0. The absolute value of the LLR is an indication of the certainty of the estimate. An estimate of a bit may “flip”, meaning that the value of the bit estimate changes: for example, a hard estimate changes from 0 to 1 or from 1 to 0, or the sign of an LLR changes from positive to negative or from negative to positive. (Similarly, “flipping” a bit of a syndrome vector indicates changing the bit from 1 to 0 or from 0 to 1.) The decoder is initialized with a-priori (possibly “soft”) estimates of the bits. These estimates are then processed and updated iteratively. The decoding can terminate after a fixed number of iterations. Alternatively, a convergence detection mechanism can terminate the decoding once all the parity check constraints are satisfied by the current bit estimates.
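As an illustration of iterative decoding with convergence detection, the following is a minimal sketch of a hard-decision bit-flipping decoder, a much simpler relative of the LDPC and Turbo decoders discussed above, used here only to show the termination logic. The function decode_bit_flip, the parity-check matrix, and the corrupted word are hypothetical examples, not the decoder of the text.

```python
# Each iteration recomputes the syndrome; decoding "converges" when
# every parity-check equation is satisfied (syndrome weight 0).

def decode_bit_flip(H, r, max_iters=20):
    """Iteratively flip bits of r until H·r = 0 over GF(2), or give up."""
    n = len(r)
    c = list(r)
    for _ in range(max_iters + 1):
        s = [sum(row[j] * c[j] for j in range(n)) % 2 for row in H]
        if sum(s) == 0:                 # convergence detected: terminate
            return c, True
        # For each bit, count the unsatisfied checks it participates in.
        votes = [sum(s[i] for i, row in enumerate(H) if row[j])
                 for j in range(n)]
        worst = max(votes)
        for j in range(n):              # flip the most-suspect bit(s)
            if votes[j] == worst:
                c[j] ^= 1
    return c, False                     # fixed iteration budget exhausted

# Hypothetical 3x6 parity-check matrix and a word with one bit error.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
decoded, converged = decode_bit_flip(H, [0, 0, 1, 1, 1, 0])
```

In this example the single erroneous bit participates in both unsatisfied checks, so it is flipped in the first iteration and the decoder detects convergence in the second.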
Another option for early decoding termination is a “divergence” detection mechanism, which detects that the probability of decoder convergence is low, so that it is more efficient to terminate the current decoding attempt and retry decoding after updating the decoder initialization values. One option for performing such divergence detection is based on the current number of unsatisfied parity-check constraints being too high. Another option is based on the evolution of the number of unsatisfied parity checks during decoding. In the event of such early termination, decoding may be repeated with updated initialization values, after changing certain parameters, such as the memory reading thresholds or reading resolution, so that the probability of successful decoding convergence in the repeated attempt is increased.
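The two divergence heuristics above can be sketched as a small predicate over the history of syndrome weights observed during decoding. The function name should_abort, the threshold max_ws, and the window length are illustrative assumptions, not values from the text.

```python
def should_abort(ws_history, max_ws=100, window=4):
    """Decide whether to abandon the current decoding attempt.

    ws_history: syndrome weights observed so far, one per iteration.
    """
    if not ws_history:
        return False
    # Heuristic 1: the current number of unsatisfied checks is too high.
    if ws_history[-1] > max_ws:
        return True
    # Heuristic 2: the count has stopped improving over a recent window,
    # suggesting the decoder is stuck and a retry with updated read
    # thresholds or resolution is more promising than iterating further.
    if len(ws_history) >= window:
        recent = ws_history[-window:]
        if min(recent) >= recent[0]:
            return True
    return False
```

A steadily decreasing history such as [40, 35, 30, 25] would continue decoding, while a flat or oscillating history triggers early termination.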
After convergence of an iterative decoding process to a valid codeword, a checksum may be performed on the resulting codeword to determine whether the decoding process has converged on an incorrect codeword. For example, a codeword may encode data bits along with CRC parity bits corresponding to the data bits. However, performing CRC processing on a codeword that results from an iterative decoding process may introduce additional decoding delay.
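The post-decoding checksum can be sketched as follows, using CRC-32 from Python's zlib module as an illustrative checksum; the actual CRC polynomial, width, and framing are implementation choices not specified above, and attach_crc/crc_check are hypothetical helper names.

```python
import struct
import zlib

def attach_crc(data: bytes) -> bytes:
    """Data plus CRC parity, as carried in a codeword's payload."""
    return data + struct.pack("<I", zlib.crc32(data))

def crc_check(payload: bytes) -> bool:
    """After ECC decoding converges, verify the CRC over the data bits
    to catch convergence to an incorrect ("mis-corrected") codeword."""
    data, stored = payload[:-4], struct.unpack("<I", payload[-4:])[0]
    return zlib.crc32(data) == stored

payload = attach_crc(b"user data")
ok_correct = crc_check(payload)                       # correct decoding
ok_wrong = crc_check(bytes([payload[0] ^ 1]) + payload[1:])  # mis-correction
```

A correctly decoded payload passes the check, while a payload from a decoder that converged on the wrong codeword fails it, at the cost of the extra CRC pass noted above.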