CMOS technology has evolved at such a brisk pace that the computer market has rapidly opened to a wide range of consumers. Today, multi-media computers require at least 32 MB of memory, which increases the relative cost of the memory within the computer. In the near future, 64 MB or 128 MB computers will become commonplace, which suggests a potentially strong demand for 256 Mb DRAMs (Dynamic Random Access Memories) and beyond. Despite the huge size of the memory arrays and the lithographic difficulties that ensue, it is more important than ever to increase the yield of the memory devices and of memory systems containing a plurality of DRAMs. Process engineers constantly struggle to reduce, and ultimately eliminate, defects in order to improve yields. Faults that inevitably remain in the chips and systems are generally overcome using special circuit designs, most notably redundancy replacement configurations.
An on-chip redundancy architecture commonly used for low-density DRAMs is shown in FIG. 1(a). This configuration assigns a limited number of redundancy rows and redundancy columns to each block, which are then used within each corresponding sub-block. This intra-block replacement configuration, however, increases the redundancy area overhead as the number of sub-arrays increases in high-density DRAMs, since each block must be provided with at least one or, preferably, two redundancy elements (REs). The effectiveness of the redundancy elements is poor because of their inherent inflexibility, further reducing the chip yield when faults are clustered in a certain sub-block. This intra-block replacement is described in the article by T. Kirihata et al., entitled "A 14 ns 4 Mb DRAM with 300 mW Active Power", published in the IEEE Journal of Solid-State Circuits, vol. 27, pp. 1222-1228, September 1992.
Another on-chip redundancy architecture, developed primarily for high-density DRAMs, is shown in FIG. 1(b), in which all the redundancy elements are clustered in one redundancy array. These elements are used for repairing faults in several of the sub-arrays. This added flexibility allows the redundancy elements and control circuits to be shared among several sub-blocks, thereby significantly increasing the effectiveness of the redundancies present therein. Because of this effectiveness, the configuration requires fewer redundancy elements and control circuits, while allowing good reparability, especially for wordline (WL), bitline (BL), and column-select line (CSL) faults. The effectiveness, or flexibility, of the aforementioned repair configuration is, however, limited to its use within a chip, a distinct drawback when repairing a substantial number of retention faults. More details on this flexible redundancy replacement may be found in an article by T. Kirihata et al., entitled "A Fault-Tolerant Design for 256 Mb DRAM", published in the IEEE Journal of Solid-State Circuits, vol. 31, pp. 558-566, April 1996.
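The contrast between the intra-block scheme of FIG. 1(a) and the flexible scheme of FIG. 1(b) can be illustrated with a minimal behavioral sketch. The function names and fault counts below are purely illustrative and not taken from the cited designs; the sketch only models which fault patterns each architecture can repair.

```python
def repair_intra_block(faults_per_block, spares_per_block):
    """Intra-block replacement (FIG. 1(a)): each sub-block may only
    draw on its own fixed allotment of redundancy elements."""
    return all(f <= spares_per_block for f in faults_per_block)

def repair_flexible(faults_per_block, total_spares):
    """Flexible replacement (FIG. 1(b)): one shared redundancy array
    serves every sub-block, so only the total fault count matters."""
    return sum(faults_per_block) <= total_spares

# Four sub-blocks with two spares each (eight spares in total), but
# the faults are clustered in a single sub-block:
faults = [3, 0, 0, 0]
print(repair_intra_block(faults, spares_per_block=2))  # False: block 0 exceeds its allotment
print(repair_flexible(faults, total_spares=8))         # True: shared spares absorb the cluster
```

The same total number of spares repairs the clustered-fault case only in the shared arrangement, which is the flexibility advantage described above.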
Semiconductor memory systems equipped with a spare or redundant chip make it possible to extend repairability to the system level. In such a system, a defective chip containing a defective element can be repaired with redundancy elements located in the redundancy chip, as shown in FIG. 1(c). The effectiveness of the memory system is drastically improved over on-chip redundancy because the chip containing the defect can still be used. However, the requirement of an added redundancy chip containing complex redundancy replacement and control circuitry makes this approach costly. It is questionable whether the added cost can be offset by the improvement in yield at the system level. On-chip redundancy is a less expensive approach and is, therefore, more common in existing memory systems. A sample configuration of a system-level redundancy that includes a spare or relief chip is described in Japanese Patent Document No. JP-A-1-269229 and in U.S. Pat. No. 5,691,952. The latter configures the chip in a plurality of `mats`, enabling the power supply to be isolated from a faulty `mat`. This approach is efficient for repairing a block fault (i.e., a large clustered fault), since designing a block redundancy in each chip (i.e., on-chip block redundancy) is generally inefficient, as discussed in the aforementioned U.S. Pat. No. 5,691,952. A typical on-chip block redundancy is described in an article by G. Kistukawa et al., "256 Mb DRAM circuit technologies for file applications", published in the IEEE Journal of Solid-State Circuits, vol. 28, pp. 1105-1113, November 1993.
Sasaki, in U.S. Pat. No. 5,469,390, describes a semiconductor system wherein defects in a given chip are cured with repair elements located in some other chip. This approach is effective for relieving faults at the system level, because a defective, unrepairable element can be replaced with an unused redundancy element present elsewhere once all the repair circuits available on the chip have been exhausted, as shown in FIG. 1(d). This approach, however, requires more complex redundancy circuitry to enable this flexibility, thereby substantially increasing its cost. Moreover, this technique is not applicable to certain types of conventional memory modules, such as the SIMM (Single In-line Memory Module) or DIMM (Dual In-line Memory Module), because all the chips within the module are generally accessed simultaneously. (Note that an I/O terminal must be used for the chip to be activated.) Although an inactive chip located in a separate module can theoretically be used for system redundancy, this approach is too complex and expensive for universal use at the system level.
Regardless of whether on-chip redundancies or system-level redundancies are used, the method of redundancy replacement shown in FIG. 1(e) uses redundancy elements consisting of a plurality of redundancy data cells, and redundancy circuitry consisting of a plurality of redundancy address storage elements for storing the addresses that identify defective elements, together with a plurality of redundancy match detection decoders, organized as separate and independent blocks. Furthermore, within the redundancy circuitry, the redundancy address storage elements and the redundancy match detection decoders are themselves arranged as separate and independent blocks. This makes it difficult to share the redundancy match detection decoders effectively when the number of redundancy address storage elements is large. If the redundancy address storage elements are organized in a two-dimensional array, sharing the redundancy match detection decoders is more effective than in a one-dimensional arrangement. However, conventional redundancy address storage elements use fuses, which are difficult to arrange as an array. Even if the redundancy address storage elements are arranged in a matrix formation with non-volatile memories, such as an EPROM or EEPROM, no effective method is known for sharing the redundancy match detection decoders. In conclusion, no prior art has been identified that provides an effective method of using redundancy elements consisting of a plurality of redundant cells, together with redundancy circuitry consisting of a plurality of redundancy address storage elements and a plurality of redundancy match detection decoders, while improving the yield of the chip and/or the memory system.
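The decoder-sharing point above can be sketched abstractly: when stored redundancy addresses are arranged two-dimensionally, selecting a row first lets one bank of comparators serve every row, whereas a one-dimensional fuse bank needs a dedicated comparator per stored address. The array contents and function name below are illustrative only.

```python
# Hypothetical 2-D arrangement of redundancy address storage:
# each row holds the stored defective addresses for one group.
stored = [
    [0x1A, 0x2B, 0x3C],   # stored addresses, row 0
    [0x4D, 0x5E, 0x6F],   # stored addresses, row 1
]

def match_row(row, addr):
    """A single shared bank of comparators scans only the selected row;
    a 1-D arrangement would need one comparator per stored address."""
    return [stored_addr == addr for stored_addr in stored[row]]

print(match_row(0, 0x2B))  # [False, True, False]
```

Here the comparator logic is instantiated once per column rather than once per stored address, which is the sharing benefit a two-dimensional arrangement would enable.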
Referring now to FIG. 1(f), there is shown a defect management engine (DME), as described in U.S. patent application Ser. No. 09/243,645, servicing a plurality of memories 110 in a system 100. These memories 110 include any and all types of on-chip or off-chip memory devices. The DME enables redundancies included therein to be advantageously used for repairing faults (labeled X) found in a domain within the memories 110. DME 130 consists of a memory array 132, wordline (WL) drivers 134, sense amplifiers 136, and DME logic 138. Unlike conventional redundancy arrays which are provided with only redundancy data cells (see FIG. 1(b)), the DME memory array 132 includes redundancy data cells 132-0 as well as redundancy address cells 132-1. Redundancy data cells 132-0 are used for replacing defective cells within the memories 110. The address cells 132-1, on the other hand, store the addresses which identify the location of the defective cells within the memories 110.
For the sake of convenience, all memories 110 are divided into a plurality of domains, the sizes of which are not necessarily the same. WLs in DME 130 are assigned to a given domain within the various memories 110 for handling defective cells within that domain. By assigning a different number of WLs to a corresponding domain, the repairability of the domains becomes variable, thus setting the stage for a variable number of redundancy replacements per domain. A more detailed description of the variable-size redundancy replacement technique is given in U.S. Pat. No. 5,831,914. WL drivers 134 activate the redundancy data cells 132-0 as well as the redundancy address cells 132-1 when a corresponding domain within the memories 110 is selected. The domain size can also be variable, allowing for a variable-domain redundancy replacement. A further description of variable-domain redundancy replacement may be found in U.S. Pat. No. 5,831,914. Sense amplifiers 136 simultaneously amplify the data bits from the redundancy data cells 132-0 and the redundancy addresses (ADRD) from the redundancy address cells 132-1. (Note: the sense amplifiers may not be necessary in the event that the redundancy address or data cells already have sufficient cell gain, e.g., in an SRAM or EPROM.) The DME logic 138 activates a self-contained redundancy match detector which compares the address inputs ADR (coupled to the DME logic 138) with a redundancy address ADRD read from the redundancy address cells 132-1. As a result, signals are generated indicating either a `MATCH` (ADR=ADRD) or, conversely, a `NO-MATCH` (ADR.noteq.ADRD) condition. MATCH initiates a self-contained redundancy replacement by opening a first switch SW1 and transferring the redundancy data bit from the redundancy data cells 132-0 to the data-line DL.sub.R. Concurrently, a second switch SW2 couples DL.sub.R to DQ.
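The variable assignment of WLs to domains can be pictured as a simple table. The domain names and WL counts below are purely illustrative; in the actual DME the assignment would be realized by the WL drivers 134, not by software.

```python
# Hypothetical WL-to-domain assignment: domains of different sizes
# receive different numbers of redundancy WLs, making repairability
# variable per domain.
wl_assignment = {
    "domain_A": [0, 1, 2, 3],  # four redundancy WLs: highest repairability
    "domain_B": [4, 5],        # two redundancy WLs
    "domain_C": [6],           # one redundancy WL
}

def wordlines_for(domain):
    """Return the redundancy WLs activated when `domain` is selected
    (each WL drives both redundancy data and address cells)."""
    return wl_assignment[domain]

print(len(wordlines_for("domain_A")))  # 4
```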
The NO-MATCH condition maintains SW1 in its normal closed position, and switch SW2 couples DL.sub.N to DQ (which is linked to memories 110), in order to access the memory. A more detailed description can be found in U.S. patent application Ser. No. 08/895,061.
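The match-and-switch behavior described above can be summarized in a small behavioral model. All names below are illustrative; a real DME performs these steps with sense amplifiers and pass gates (SW1, SW2), not software, and the dictionary merely stands in for the redundancy address and data cells.

```python
class DefectManagementEngine:
    """Behavioral sketch of the DME of FIG. 1(f) (illustrative only)."""

    def __init__(self):
        # Each entry pairs a stored defective address (ADRD, held in the
        # redundancy address cells 132-1) with its replacement data
        # (held in the redundancy data cells 132-0).
        self.entries = {}

    def program(self, adrd, data):
        self.entries[adrd] = data

    def read(self, adr, memory):
        # Self-contained redundancy match detection: compare the input
        # address ADR against the stored addresses ADRD.
        if adr in self.entries:          # MATCH (ADR = ADRD)
            return self.entries[adr]     # SW2 couples DL.sub.R to DQ
        return memory[adr]               # NO-MATCH: SW2 couples DL.sub.N to DQ

memory = {0x10: "good", 0x11: "bad"}     # address 0x11 holds a defective cell
dme = DefectManagementEngine()
dme.program(0x11, "repaired")
print(dme.read(0x10, memory))  # good     (normal access via DL.sub.N)
print(dme.read(0x11, memory))  # repaired (redundancy access via DL.sub.R)
```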
In conclusion, the previous DME 130 fully integrates a plurality of redundancy circuits, consisting of self-contained domain selectors (WL drivers 134), self-contained redundancy match detectors (DME logic 138), and self-contained redundancy replacements (redundancy data cells 132-0 and redundancy address cells 132-1), all within a single memory chip, enabling in the process a variable number of redundancy replacements per domain, a variable-domain redundancy replacement, and a variable number of bit replacements.
This fully integrated DME is particularly advantageous for a low-cost memory system. However, the fully integrated DME is not preferable for a high-performance memory system, since the DQ switching between the DME and a defective memory device is difficult at high frequencies. An on-chip DME may overcome this problem; however, providing a DME in each chip is expensive.