Efforts to increase the capacity of NAND-type flash memory (hereinafter called flash memory) have progressed in recent years, and flash memory has come to be used in place of a hard disk drive (hereinafter HDD) as a secondary storage system. For example, whereas the response time of an HDD is roughly 10 milliseconds, the response time of flash memory is roughly between 100 microseconds and 2 milliseconds, far shorter than that of the HDD. The widespread use of flash memory as a storage medium in computers is due to this short response time.
However, realizing increased capacity in flash memory generally involves sacrificing reliability. Two problems exist with respect to reliability.
One problem relates to data retention characteristics. In a typical flash memory, when stored data is read back, between 40 and 50 bit errors occur per 512 bytes of data. Therefore, when storing data in a flash memory, an error correcting code (hereinafter called ECC) must be added to each 512 bytes of data so that 40 to 50 bit errors per 512 bytes of data can be corrected.
The other problem is program-erase tolerance. Writing (called programming) and deleting (called erasing) data to and from the flash memory damages the flash memory cell that stores the data. When data is repeatedly programmed and erased, the flash memory cell eventually breaks down completely and data can no longer be recorded in this cell. This signifies that the bit error probability of the flash memory rises as data is repeatedly programmed to and erased from the flash memory.
As described up to this point, in order to use flash memory as a secondary storage system in a computer, data must be encoded using an ECC. There are numerous ECCs, but the ECC known for its high correction capability is the Low-Density Parity-Check code (hereinafter called the LDPC codes). The LDPC codes are an ECC that applies the Bayes theorem. The Bayes theorem will be explained here in order to explain the error correction principle of the LDPC codes.
It is supposed that the probability of an event A occurring is P(A). It is supposed that the conditional probability of an event B occurring after the event A has occurred is P(B|A). According to the definition of conditional probability, P(B|A) is given as math (1).
P(B|A) = P(A ∩ B)/P(A)  [Math. 1]

Alternatively, the conditional probability P(A|B) of the event A occurring after the event B has occurred is given as math (2).
P(A|B) = P(A ∩ B)/P(B)  [Math. 2]

The Bayes theorem (3) is obtained from maths (1) and (2).
P(A|B) = P(B|A)P(A)/P(B)  [Math. 3]
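As a quick numeric illustration of the Bayes theorem (3), consider a toy example; the probabilities P(A) = 0.5, P(B|A) = 0.9, and P(B|not A) = 0.2 below are assumed values chosen only for illustration:

```python
# Numeric check of the Bayes theorem (math (3)) on assumed toy probabilities.
P_A = 0.5               # P(A): probability of event A
P_B_given_A = 0.9       # P(B|A)
P_B_given_notA = 0.2    # P(B|not A)

# Total probability of B over both cases of A
P_B = P_B_given_A * P_A + P_B_given_notA * (1 - P_A)

# Bayes theorem: P(A|B) = P(B|A) P(A) / P(B)
P_A_given_B = P_B_given_A * P_A / P_B
print(P_A_given_B)      # about 0.818: observing B makes A much more likely
```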
Here it is supposed that the event A is an event in which data x is programmed to a flash memory cell. The x takes a value of either 0 or 1. It is supposed that the event B is an event in which, when the data is read from the flash memory cell, the data is y. The y takes a value of either 0 or 1. It is supposed here that the bit error probability of the relevant flash memory cell is p, and that p is ascertained via testing. In so doing, a conditional probability P(y|x) can be computed. For example, maths (4) and (5) below are obtained.

P(y = 0|x = 0) = (1 − p)P(x = 0)  [Math. 4]

P(y = 0|x = 1) = pP(x = 1)  [Math. 5]

Normally, since P(x = 0) and P(x = 1) are each ½, the value of P(y|x) can be computed in concrete terms. This P(y|x) is called the prior probability. Meanwhile, P(x|y) signifies the probability that, although the data read from the flash memory cell is y, the data programmed to this memory cell in the past was x. This P(x|y) is called the posterior probability. The posterior probability P(x|y) can be computed in accordance with the Bayes theorem (3). The LDPC codes are an ECC which computes the posterior probability P(x|y) using the Bayes theorem (3), and estimates that the x which maximizes the posterior probability P(x|y) is the correct data.
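The posterior computation for a single flash memory cell can be sketched as follows; the bit error probability p = 0.01 is an assumed figure for illustration. With equiprobable programmed data, reading y = 0 yields the posterior P(x = 0|y = 0) = 1 − p:

```python
# Posterior probability for one flash cell via the Bayes theorem (math (3)).
# p = 0.01 is an assumed, illustrative bit error probability.
p = 0.01
P_x0 = P_x1 = 0.5                 # programmed data assumed equiprobable

# Likelihoods of reading y = 0 from the cell
P_y0_given_x0 = 1 - p             # read matches the programmed 0
P_y0_given_x1 = p                 # a bit error turned a programmed 1 into 0

# Total probability of reading y = 0, then the Bayes theorem
P_y0 = P_y0_given_x0 * P_x0 + P_y0_given_x1 * P_x1    # = 0.5
P_x0_given_y0 = P_y0_given_x0 * P_x0 / P_y0           # = 1 - p

# Since P(x=0|y=0) > P(x=1|y=0), the decoder estimates x = 0.
```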
However, in actuality, computing the posterior probability is not easy. The log-domain Sum-Product Algorithm, which is a typical LDPC codes decoding method, will be explained here. First, it is supposed that the codeword that was programmed to the flash memory adheres to math (6),

c = (c1, . . . , cN)  [Math. 6]
and that the codeword that was read adheres to math (7).

r = (r1, . . . , rN)  [Math. 7]
The log-domain Sum-Product Algorithm does not compute the posterior probability, but rather computes a log likelihood ratio function. When it is supposed that X is a random variable that takes a value of either 0 or 1, the log likelihood ratio function of the random variable X is defined as:
λ(X) ≜ log[P(X = 0)/P(X = 1)]  [Math. 8]
In so doing, the log likelihood ratio function of a prior probability P(rn|cn) of the nth bit of the codeword can be computed from maths (4) and (5).
λn = log((1 − p)/p) when rn = 0, and λn = log(p/(1 − p)) when rn = 1  [Math. 9]

Next, a check bit zmn for detecting an error is defined.
zmn ≜ Σj∈A(m,n) cj  [Math. 10]

The sum of math (10) expresses an exclusive OR, and A(m,n) is a set of indices of a parity check matrix H.

A(m,n) ≜ {j | Hmj = 1, j ≠ n}  [Math. 11]

Furthermore, math (12) defines B(n,m) as a set of indices of the parity check matrix.

B(n,m) ≜ {i | Hin = 1, i ≠ m}  [Math. 12]

Maths (13) and (14) hold when it is supposed that the posterior log likelihood ratio function corresponding to the check bit zmn is αmn, and the posterior log likelihood ratio function corresponding to the nth bit cn of the codeword is βmn.
αmn = (Πj∈A(m,n) sign(λj + βmj)) · f(Σj∈A(m,n) f(|λj + βmj|))  [Math. 13]

βmn = Σi∈B(n,m) αin  [Math. 14]

The function f is called Gallager's f function.
f(x) ≜ log((e^x + 1)/(e^x − 1))  (x > 0)  [Math. 15]

If βmn were known, αmn could be computed from math (13). However, as is evident from math (14), βmn is in turn computed from αmn. It is therefore not possible to compute the posterior log likelihood ratio function exactly. Consequently, an approximate computation is performed. First, math (13) is computed using βmn = 0, which yields a provisional value of αmn. Substituting this provisional value into math (14) makes it possible to compute βmn. Plugging the βmn obtained here back into math (13) makes it possible to compute a new αmn. By repeating this, αmn and βmn can be expected to converge to certain values. Then, as is clear from the definition (8) of the log likelihood ratio function, each bit of the codeword can be estimated using the following math.
ĉn = 0 when sign(λn + Σi∈B(n) αin) = +1, and ĉn = 1 otherwise, where B(n) = {i | Hin = 1}  [Math. 16]

The above computation can be stated in the following algorithmic form.
    Step 0: Set the maximum number of iterations. Set the iteration counter l to 0. Furthermore, set all βmn to 0.
    Step 1: Compute math (13) for all m and n.
    Step 2: Compute math (14) for all m and n.
    Step 3: Estimate the value of each bit of the codeword in accordance with math (16). The result is referred to as the temporary estimated word.
    Step 4: Compute the product of the temporary estimated word and the parity check matrix.
    Step 5: In a case where the computation result of Step 4 equals 0, assume that the correction is complete and end the processing. When this is not the case, proceed to Step 6.
    Step 6: Increment the iteration counter l by 1.
    Step 7: When the iteration counter l exceeds the maximum value, assume that correction is not possible and end the processing. When this is not the case, go to Step 1.
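Steps 0 through 7 above can be sketched in code. The sketch below is an illustrative implementation under assumed conditions, not the circuit described in this document: the parity check matrix is a small dense toy example rather than a true low-density matrix, the channel error probability p = 0.1 is assumed, and the names (sum_product_decode, gallager_f) are hypothetical.

```python
import numpy as np

def gallager_f(x):
    # Gallager's f function of math (15); valid for x > 0 (diverges at x = 0).
    return np.log((np.exp(x) + 1.0) / (np.exp(x) - 1.0))

def sum_product_decode(H, llr, max_iter=20):
    """Log-domain Sum-Product decoding following Steps 0-7.
    H: M x N binary parity check matrix; llr: channel LLRs of math (9)."""
    M, N = H.shape
    beta = np.zeros((M, N))                       # Step 0: all beta_mn = 0
    c_hat = np.zeros(N, dtype=int)
    for _ in range(max_iter):                     # Step 7 bounds the loop
        alpha = np.zeros((M, N))
        for m in range(M):                        # Step 1: math (13)
            idx = np.flatnonzero(H[m])
            for n in idx:
                others = idx[idx != n]            # the index set A(m, n)
                t = llr[others] + beta[m, others]
                sign = np.prod(np.sign(t))
                alpha[m, n] = sign * gallager_f(np.sum(gallager_f(np.abs(t))))
        for n in range(N):                        # Step 2: math (14)
            idx = np.flatnonzero(H[:, n])
            for m in idx:
                beta[m, n] = np.sum(alpha[idx[idx != m], n])
        total = llr + np.sum(alpha, axis=0)       # Step 3: math (16)
        c_hat = (total < 0).astype(int)           # negative LLR -> bit 1
        if not np.any(H @ c_hat % 2):             # Steps 4-5: syndrome check
            return c_hat, True                    # correction complete
    return c_hat, False                           # Step 7: correction failed

# Toy example: 3 x 7 parity check matrix, all-zero codeword with bit 0
# flipped by the channel; p = 0.1 is an assumed bit error probability.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
p = 0.1
r = np.array([1, 0, 0, 0, 0, 0, 0])               # read word, one bit error
llr = np.where(r == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
decoded, ok = sum_product_decode(H, llr)          # recovers the all-zero word
```

A hardware decoder would work on sparse matrix structures rather than dense loops, but the flow of Steps 0 through 7, including the syndrome check that terminates decoding, is the same.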
The LDPC codes estimate the correct data by computing a posterior probability (strictly speaking, the log likelihood ratio function of a posterior probability) based on a prior probability (strictly speaking, the log likelihood ratio function of a prior probability) like this. The prior probability can be computed when the bit error probability p of the flash memory cell is measured by experiment. However, as described above, the bit error probability p of the flash memory cell increases as data programming is repeated. If the value of p used by the decoder remains fixed, it comes to deviate from the actual bit error probability of the flash memory cell. As is clear from the log-domain Sum-Product Algorithm of maths (13) and (16), the prior probability (the log likelihood ratio function of the prior probability) plays an important role in correction. Therefore, when the value of the prior probability (that is, the bit error probability p) deviates from the actual value, the LDPC codes become incapable of correcting errors.
The technologies of Patent Literatures 1 and 2 are inventions devised to solve this problem. In Patent Literature 1, an LDPC codes error correcting circuit monitors the analog output voltage of the flash memory and compensates the bit error probability p of the flash memory cell. For example, it is supposed that the design is such that data is 1 in a case where the analog output voltage of the flash memory cell is 1 V. When data is repeatedly programmed into the flash memory cell, the degradation of the flash memory cell advances and the analog output voltage falls below 1 V. When it is supposed that the analog output voltage is z V, the relationship in math (17) exists between the bit error probability p of the flash memory cell and the output voltage z.

z = 1 − p  [Math. 17]
Based on this relationship, the correction capabilities of the LDPC codes can be maintained by correcting the bit error probability p of the flash memory cell and thereby preventing the above-mentioned deviation.
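Under the relationship of math (17), this compensation can be sketched as follows; the function names are hypothetical, and the voltage figure is an assumed example consistent with the 1 V nominal output described above:

```python
import math

def compensated_error_probability(z: float) -> float:
    # Math (17): z = 1 - p, so the compensated bit error probability is
    # p = 1 - z, where z is the measured analog output voltage in volts.
    return 1.0 - z

def channel_llr(r_n: int, p: float) -> float:
    # Prior log likelihood ratio lambda_n of math (9), recomputed with the
    # compensated p so that the decoder's prior tracks the degraded cell.
    return math.log((1 - p) / p) if r_n == 0 else math.log(p / (1 - p))

# A degraded cell whose output has dropped from 1 V to 0.98 V (assumed):
p = compensated_error_probability(0.98)    # p = 0.02
lam = channel_llr(0, p)                    # a weaker prior than a fresh cell
```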