Currently, erroneous data (i.e., data known to be bad, also referred to herein as “poisoned” data) may be stored using existing storage bits in error correction code (ECC)-protected memories, such as a dynamic random access memory (DRAM). In one scheme, extra storage bits may be used to save poisoned-data indicators. However, this scheme may require extra storage, which may be expensive and non-standard. In another scheme, it may be possible to mark data blocks (e.g., cache lines, pages, and the like) as “poison” using the existing ECC, via a special error (i.e., poison) indicator. However, if a memory location is already faulty, encoding the error indicator into the ECC at that location may alter the ECC state of the data block: it may convert a corrected error into an uncorrected error or an undetected error, or it may convert an uncorrected error into an undetected error.
Furthermore, ECC typically may protect 2^n − 1 symbols, although data is typically grouped in 2^m symbols, where n > m. One example is when n = m + 1. This may leave 2^n − 2^m − 1 symbols unused by the data. Some of these symbols may be needed for check symbols, but more symbols may be available for protection than are needed. Thus, the ECC code may be shortened by forcing the unused symbols to zero. For example, 128 data bits may be divided into sixteen 8-bit symbols, and may require 4 check symbols. An ECC code capable of protecting 31 symbols (i.e., n = 5) may be used, where 16 symbols are data, 4 symbols are check symbols, and the remaining 11 symbols are unused, thus creating a (20,16) code from the available (31,27) code. However, a static value of a check symbol may not be used as a poison indicator, as all possible values of the check symbol are used with valid data.
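The shortening arithmetic above can be illustrated with a small sketch. This is not part of any claimed embodiment; it merely computes, for an assumed symbol-oriented code of full length 2^n − 1, how many symbols remain unused after the data and check symbols are accounted for, and the resulting shortened code parameters.

```python
def shortened_code(n, data_symbols, check_symbols):
    """Illustrative only: parameters of a shortened symbol-oriented ECC.

    The full code protects 2**n - 1 symbols; unused symbols are forced
    to zero, yielding a shortened (used, data_symbols) code.
    """
    full_length = 2 ** n - 1                  # e.g., 31 symbols for n = 5
    used = data_symbols + check_symbols       # symbols actually transmitted/stored
    unused = full_length - used               # symbols forced to zero by shortening
    return used, data_symbols, unused


# Example from the text: 128 data bits as sixteen 8-bit symbols,
# 4 check symbols, full code length 31 (n = 5).
length, k, unused = shortened_code(5, 16, 4)
print((length, k))   # the shortened (20,16) code
print(unused)        # 11 symbols left unused
```

With these numbers, the full (31,27) code is shortened by the 11 unused symbols to the (20,16) code described above.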