Non-volatile memories (flash memories), whether as standalone chips or integrated together with logic, follow the same trend toward very large scale integration as logic chips. Alongside ever smaller structure widths, the increasing complexity of systems and the growing volumes of data to be stored result in appreciably greater memory requirements for flash memories if these applications are to be served usefully. In contrast to “classic” EEPROM memories, flash memories are distinguished by a smaller cell area, so flash memories are used for most applications (particularly as data memories). One drawback of flash memory, however, is its coarse erasure granularity: for most flash memories the smallest erasable unit is a sector, which is typically larger than 1 kB.
Increasing miniaturization makes it possible to produce more and more memory cells in the same chip area, but it also increases interference effects that can impair the reliability of the memory cells during operations such as programming, erasure and reading. Many of these undesirable effects arise only statistically and/or stochastically, however. For this reason, error-correction and error-recognition methods are used to ensure that the memory operates properly under these influences. The methods used are oriented both to the quality standard of the respective application (for use in cars, for example, the highest quality standards apply) and to the nature and extent of the interference effects.
For the flash memories considered here, and without restricting the general nature, consider a shortened BCH code with additional overall parity which is designed for 1-bit and 2-bit error correction and 3-bit error recognition, and which has a minimum code distance of 6 and a user data area of 64 bits. To take account of the overall parity, the BCH code is extended by the parity bit, as is known to a person skilled in the art and described, for example, in MacWilliams, F. J., and Sloane, N. J. A., “The Theory of Error-Correcting Codes”, Amsterdam, 1977, pp. 27 ff. In practice, memories having at least approximately 16 user data bits are of interest, since for smaller numbers of user data bits the number of check bits required is relatively large. Error correction for 1-bit and 2-bit errors and error recognition for 3-bit errors by means of BCH codes with additional overall parity and a code distance of 6 are known to a person skilled in the art and described in U.S. Pat. No. 4,030,067, for example. To also take account of the overall parity, the H matrix of the BCH code which corrects 1-bit and 2-bit errors has a further check bit added to it which provides the overall parity. In U.S. Pat. No. 4,030,067, this is done by adding to the H matrix of the BCH code a row which contains nothing but ones.
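The extension of an H matrix by an overall-parity row can be sketched as follows. Since the actual matrices of the shortened BCH code are large, a small Hamming(7,4) code serves here as a hypothetical stand-in; the principle (append one column for the new parity check bit and one all-ones row) is the same:

```python
# Toy illustration: extend a parity-check matrix by an overall-parity row.
# A Hamming(7,4) code stands in for the shortened BCH code of the text;
# the mechanism (one extra parity column, one all-ones row) is identical.

def extend_with_overall_parity(H):
    """Append a zero column for the new parity bit, then an all-ones row."""
    H_ext = [row + [0] for row in H]       # existing checks ignore the parity bit
    H_ext.append([1] * (len(H[0]) + 1))    # new row: overall parity over all bits
    return H_ext

def syndrome(H, word):
    """Compute H * word^T over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# Parity-check matrix of the Hamming(7,4) code.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

H_ext = extend_with_overall_parity(H)

# A valid Hamming codeword, extended by its overall parity bit.
c = [1, 1, 1, 0, 0, 0, 0]
c_ext = c + [sum(c) % 2]
print(syndrome(H_ext, c_ext))   # [0, 0, 0, 0]: still a valid code word

# A single-bit error also violates the overall-parity row, which is what
# lets the decoder distinguish odd from even error counts.
e = c_ext.copy()
e[2] ^= 1
print(syndrome(H_ext, e))       # [1, 1, 0, 1]: nonzero syndrome, odd parity
```

The nonzero last syndrome bit for the single-bit error reflects exactly the role of the all-ones row: it counts the parity of the entire word.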
For a user data area of 64 bits, considered as an example, a total of 15 check bits are then obtained, 14 check bits being attributable to the original BCH code and the 15th bit expressing the overall parity. The overall code word then has a length of 64 + 15 = 79 bits and is programmed into the memory. The coarse erasure granularity of flash memories now means that it frequently arises that data of a particular type, e.g. the state of an odometer, are stored repeatedly at different times before an erasure operation takes place. Such data are then present in the memory multiple times, in some cases alongside older versions which are no longer up to date. To explicitly identify older data which are no longer valid as irrelevant or invalid, what is known as an invalidation marker is used. In the present example, this marker can be chosen such that all 79 bits of a data item which is to be marked as invalid, i.e. its user data bits and its check bits, are programmed or overwritten with 1. The resulting word marked as invalid therefore has a 1 in every bit position.
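The invalidation marker described above can be sketched as follows; the widths (64 user data bits, 15 check bits) are taken from the example, while the function names and the sample value are hypothetical:

```python
# Sketch of the all-ones invalidation marker for the 79-bit example word.

USER_BITS = 64
CHECK_BITS = 15
WORD_BITS = USER_BITS + CHECK_BITS      # 64 + 15 = 79-bit code word

INVALID_MARKER = (1 << WORD_BITS) - 1   # all 79 bits set to 1

def invalidate(word):
    """Mark a stored code word as invalid by setting every bit to 1.
    (In flash, bits can typically be programmed in only one direction
    without an erase, which is why a fixed all-ones pattern can be
    written over an existing word.)"""
    return word | INVALID_MARKER

def is_invalidated(word):
    return word == INVALID_MARKER

stale_odometer_word = 0x1234            # hypothetical outdated code word
assert is_invalidated(invalidate(stale_odometer_word))
print(WORD_BITS)                        # 79
```

Because the marker overwrites check bits as well as user data bits, the stored word generally ceases to be a code word of the BCH code, which leads to the problem discussed next.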
One disadvantage in this case is that the code word altered in this way can come into conflict with the error-recognition or error-correction code used, hampering or even preventing error recognition or error correction.
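Whether the all-ones word conflicts with the code depends entirely on the H matrix. The following sketch again uses a small extended Hamming code (8-bit words) as a hypothetical stand-in for the 79-bit BCH code; here every row of the H matrix happens to have even weight, so the all-ones word passes every check:

```python
# Why the all-ones invalidation word can conflict with the code:
# its syndrome depends entirely on the H matrix in use.
# Toy stand-in: an extended Hamming(7,4) code (8-bit words), not the
# actual 79-bit BCH code of the text.

def syndrome(H, word):
    """Compute H * word^T over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

H_ext = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],   # overall-parity row
]

all_ones = [1] * 8
print(syndrome(H_ext, all_ones))  # [0, 0, 0, 0]

# Every row of this H matrix has even weight, so the all-ones word is a
# *valid* code word: the decoder cannot tell an invalidated entry apart
# from correct data.  With an H matrix containing odd-weight rows, the
# same word would instead yield a nonzero syndrome, and the decoder
# might then "correct" the invalidation marker into a different,
# seemingly valid word.  Both cases are forms of the conflict described
# above.
```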