Soft decoding of signals associated with a quantized channel suffers from well-known fidelity and speed problems. Attempts to improve the fidelity of soft decoding have, in some cases, frustrated attempts to improve the speed at which signals are decoded. Similarly, attempts to improve decoding speed have frustrated attempts to improve fidelity.
Soft decoding may depend on a log likelihood ratio (LLR). An LLR is a measure of the reliability of a decoded value. The value may be decoded from a signal received from a quantized channel. One example of a quantized channel is a memory (e.g., NAND flash memory). In one NAND flash memory embodiment, an LLR>0 may indicate that a bit from the memory is likely a 0 while an LLR<0 may indicate that the bit is likely a 1. The validity of the indication may be checked using different validity checks including, for example, parity checks and error correction code processing. An LLR may be described according to:
LLR = log(P(v=0|Sv)/P(v=1|Sv))
where P is the probability, v is the value, and Sv is the signal associated with the value.
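The ratio above can be sketched directly in code. The following is a minimal illustration, not part of the original disclosure; the function and variable names (e.g., `llr`, `decode_bit`) are hypothetical, and the probabilities are assumed to already be conditioned on the received signal:

```python
import math

def llr(p_zero: float, p_one: float) -> float:
    """Log-likelihood ratio for a decoded bit.

    p_zero: P(v = 0 | Sv), probability the value is 0 given the signal.
    p_one:  P(v = 1 | Sv), probability the value is 1 given the signal.
    """
    return math.log(p_zero / p_one)

def decode_bit(llr_value: float) -> int:
    # LLR > 0 suggests the bit is a 0; LLR < 0 suggests a 1.
    return 0 if llr_value > 0 else 1

# A bit that is 90% likely to be 0 yields a positive LLR.
reliable_zero = llr(0.9, 0.1)

# A bit that is only 60% likely to be 1 yields a negative LLR
# of smaller magnitude, i.e., a less reliable decision.
weak_one = llr(0.4, 0.6)
```

Note that the magnitude of the result, not just its sign, carries information: `reliable_zero` has a larger magnitude than `weak_one`, reflecting the higher confidence in that decision.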
Thus, an LLR is the log of the probability that the value is a zero given current constraints and observations (e.g., the signal) divided by the probability that the value is a one given those same constraints and observations. The likelihood that the value decoded from the signal is correct varies directly with the magnitude of the LLR: the larger the magnitude, the more likely the decoded value is correct.
Threshold voltage distributions for a NAND memory may be known and characterized. For example, the mean and variance for a Gaussian distribution may be known. In this case, if the reference voltages for the memory are also known, then LLRs can be computed that facilitate accurately and quickly decoding the quantized signals. However, in practice, the mean and variance may not be known, and the distribution may not be perfectly Gaussian. Therefore, it may be difficult to calculate LLRs, making it difficult, in turn, to accurately and quickly decode quantized signals when a soft error correction code decoder is used.
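To make the known-distribution case concrete, the sketch below computes an LLR for a read voltage under the assumption of Gaussian threshold voltage distributions with known means and standard deviations, and equal prior probabilities for the two states. The distribution parameters and function names are hypothetical illustrations, not values from the original disclosure:

```python
import math

def gaussian_pdf(x: float, mean: float, std: float) -> float:
    """Probability density of a Gaussian at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def gaussian_llr(v_read: float, mean0: float, std0: float,
                 mean1: float, std1: float) -> float:
    # With equal priors, Bayes' rule reduces LLR = log P(v=0|Sv)/P(v=1|Sv)
    # to the log ratio of the conditional signal densities.
    return math.log(gaussian_pdf(v_read, mean0, std0) /
                    gaussian_pdf(v_read, mean1, std1))

# Hypothetical cell-state distributions: state 0 centered at 1.0 V,
# state 1 centered at -1.0 V, equal spread.
llr_near_zero_state = gaussian_llr(0.8, 1.0, 0.5, -1.0, 0.5)   # positive: likely a 0
llr_at_midpoint = gaussian_llr(0.0, 1.0, 0.5, -1.0, 0.5)       # ~0: maximally uncertain
```

When the means and variances are unknown or the distributions are not Gaussian, as the passage above notes, this closed-form computation is unavailable, which is what makes LLR estimation difficult in practice.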
Conventionally, LLRs may have been chosen for a decoder in an iterative manner using trial and error. Typically, each LLR calculation started from scratch in the hope that a convergence would be reached and that a value could be decoded. These conventional approaches may have consumed undesirable amounts of time and power while providing lower than desired fidelity.