1. Field of the Invention
This invention generally relates to integrated circuit devices, and, more specifically, relates to a fault-tolerant integrated circuit device comprising a wafer of dynamically configurable gate arrays. This device provides fault-tolerance with respect to manufacturing defects by mapping all defective gate arrays and defective portions of each gate array on the wafer. This mapping of defects occurs when the wafer is initially tested after fabrication, and the mapping information is used to program the wafer with the desired functions without using any of the defective portions of the wafer.
2. Description of the Related Art
Programmable logic devices are well-known in the electronics art, and have progressed from simple AND-OR arrays to very complex Field Programmable Gate Arrays (FPGAs), which have a large number of input/output (I/O) blocks, programmable logic blocks and programmable routing resources to interconnect the logic blocks to each other and to the I/O blocks. Many uses for these FPGAs have been found, with most being used to implement a large number of combinatorial logic functions, which results in lower part count, lower power dissipation, higher speed and greater system flexibility than if discrete components were used.
In recent years, FPGAs based on Random Access Memory (RAM) have been introduced by several manufacturers, including XILINX. The basic configuration of the XILINX FPGA is described in U.S. Pat. No. 4,870,302 to Freeman, which is assigned to XILINX, and is incorporated herein by reference. In addition, the technical features of XILINX FPGAs are described in XILINX, The Programmable Gate Array Data Book, (1992). The XILINX RAM-based FPGA has multiple I/O blocks, logic blocks and routing resources. The routing resources are used to interconnect the logic blocks to each other and to the I/O blocks, and to connect the I/O blocks to the I/O pads of the FPGA. The FPGA is programmed by loading Configuration Data into the Configuration Memory Array of the FPGA. Since the XILINX FPGA is RAM-based, it is unconfigured when power is first applied. Once the Configuration Data has been loaded into the Configuration Memory Array, the FPGA is ready for operation. The process of designing a circuit for a XILINX FPGA is described in XILINX, User Guide and Tutorials, (1991).
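The power-up behavior described above can be illustrated with a minimal sketch. This is not the XILINX programming interface; the class and method names are hypothetical, and the model simply captures the stated sequence: a RAM-based device is unconfigured at power-up and becomes operational only after Configuration Data is loaded into its Configuration Memory Array.

```python
# Illustrative model of a RAM-based FPGA's configuration sequence.
# All names here are hypothetical, not an actual vendor API.

class RamBasedFpga:
    def __init__(self, config_size):
        # RAM-based: the configuration memory holds no useful state at power-up.
        self.config_memory = [0] * config_size
        self.configured = False

    def load_configuration(self, bitstream):
        # Configuration Data must fill the entire Configuration Memory Array.
        if len(bitstream) != len(self.config_memory):
            raise ValueError("bitstream does not match configuration memory size")
        self.config_memory = list(bitstream)
        self.configured = True

    def operate(self):
        # The device cannot function until configuration completes.
        if not self.configured:
            raise RuntimeError("device powered up but not yet configured")
        return "operational"

fpga = RamBasedFpga(config_size=8)
fpga.load_configuration([1, 0, 1, 1, 0, 0, 1, 0])
print(fpga.operate())  # -> operational
```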
For a brief history of advances in integrated circuit (IC) technology, see John L. Hennessy and David A. Patterson, Computer Architecture: A Quantitative Approach, 53-62, (1990). With each advancement in IC technology, performance has typically increased by one to two orders of magnitude. The driving force behind IC performance includes three basic factors: 1) gate propagation delay, or the time it takes a transistor to turn on and off; 2) signal propagation delay, or the time it takes a signal to propagate from the output of one gate to the input of another; and 3) level of integration, or the number of gates that are incorporated onto a working die.
The first two factors, gate and signal propagation delay, are determined by the minimum feature size of the processing technology. As the minimum feature size is reduced, the effects of both of these factors are reduced, resulting in an improvement in overall speed and performance. The third factor, integration density, is determined primarily by the clean room environment and the processing technology. As the level of integration increases, the circuit will become more powerful, since more logical resources can be fabricated on-chip. This higher level of integration eliminates the long propagation delays that occur between two chips, increasing system speed and performance.
The obvious solution to increasing the speed of the IC would be to increase the overall size of the die and decrease the size of each gate or processing element. However, as the die size increases, the number of devices on a semiconductor wafer decreases and the yield (the percentage of acceptable working die on one wafer) drops exponentially. This drop is expected since the number of defects per given area remains roughly constant; as the size of the die increases, so does the likelihood that a defect will fall within any given die. Yield is determined by two primary factors: the particle count in parts per billion (PPB) of the clean room, and the die size of the chip. If a defect, due to dust or inconsistencies in the crystal lattice, occurs on a specific chip on the wafer, then the chip becomes useless. Since yield drops as chip size increases, chips are typically designed with only the minimum circuits required to make them work, and if any part of the chip is non-functioning then the entire chip is useless. Semiconductor devices with relatively small die sizes therefore provide a much higher number of devices per wafer and a much higher yield than devices with larger die sizes. Thus, practical limitations on die size directly conflict with the goal of increasing the complexity and functionality of FPGAs, which inherently pushes the die size for FPGAs to increase.
A powerful example of the yield decrease that occurs from increasing die size follows. On a six inch wafer, 23.2 chips that are one cm per side (which are smaller than the 80486 microprocessor developed by Intel) can be built with a typical yield of 3.6%. This yield means that two wafers at a cost of $550 each would be required to get one usable chip. If the die size of the chip were decreased, more chips could be fabricated on a single wafer, and the probability of chip failure would decrease.
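The exponential relationship between die area and yield described above can be sketched with the standard zero-defect (Poisson) yield model. The defect density used below is an assumed round number for illustration, not a figure from the example above; the point is only the shape of the curve: doubling the die side quadruples the area and drives yield down exponentially.

```python
import math

# Hedged sketch of the exponential yield drop described in the text.
# D is an assumed defect density (defects per square cm), chosen only
# to illustrate the trend; real fabs characterize D empirically.

def poisson_yield(defect_density, die_area_cm2):
    """Probability that a die of the given area contains zero defects."""
    return math.exp(-defect_density * die_area_cm2)

D = 1.0  # assumed defect density, defects per cm^2
for side_cm in (0.5, 1.0, 2.0):
    area = side_cm * side_cm
    print(f"{side_cm} cm per side: yield = {poisson_yield(D, area):.1%}")
# Quadrupling the area (1.0 cm -> 2.0 cm per side) collapses the yield
# far more than proportionally, as the text asserts.
```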
Defects in the wafer would have a less dramatic impact if extra or spare circuitry were included on the die to allow bypassing the defect. The manufacturer of the wafer would then be able to locate defects on the die and replace nonfunctional circuitry with the spare circuitry. This technique makes the chip fault-tolerant, and has been implemented by manufacturers to some extent. For example, U.S. Pat. No. 4,937,475 to Rhodes et al. discloses an integrated circuit chip with multiple logic blocks interconnected by a series of horizontal and vertical conductors. The vertical conductors are electrically isolated from the crossing horizontal conductors, and have a laser diffusible region at each intersection such that a laser beam can cause the region to conduct, thereby connecting the vertical conductor with the crossing horizontal conductor. In addition, the laser beam is used to sever the conductors in certain places. In this manner, the laser beam can custom-configure the chip as desired by making some connections and breaking others, which allows the defective portions of the chip to be bypassed.
The success of fault-tolerance through redundant or spare circuitry has been limited, however, because the granularity of the spare circuitry in current architectures is unworkable. For example, in the case of a microprocessor chip, duplicating its functional units (such as the Arithmetic Logic Unit (ALU), registers, etc.) would require at least doubling the area of the die. If nothing were wrong with the primary units, then half of the die would be wasted on unused spares. In addition, increasing the size of the die to accommodate spares causes the yield to drop even further, since the same yield statistics that apply to the entire chip also apply to the spares.
A solution to this problem is to provide a general-purpose functional block architecture similar to the XILINX dynamically configurable FPGAs referenced above. If each functional block on a chip is identical, faults on the chip can be easily bypassed by routing the circuitry around any functional blocks rendered non-functional by defects. If the granularity of the functional blocks is sufficiently small, the number of defective functional blocks will be small compared to the number of operable functional blocks. Thus an FPGA which has functional blocks of sufficiently small granularity will not be significantly impacted by defects, making the FPGA fault-tolerant. A wafer of such FPGAs would allow bypassing of all defects, rendering the entire wafer fault-tolerant. In this manner one large circuit could be made using the entire area of a wafer. This process, called waferscale integration, would allow circuits to be developed and fabricated that are fault-tolerant and, hence, not susceptible to the traditional drop in yield associated with increasing the size of the die.
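The defect-mapping scheme described above can be sketched in a few lines. This is an illustrative abstraction, not the patented method's actual data structures: each functional block in an array is tested once after fabrication, failing blocks are recorded in a defect map, and the desired functions are then assigned only to blocks that passed.

```python
# Illustrative sketch of mapping defective functional blocks at wafer test
# and placing logic only on the blocks that passed. The data structures
# here are hypothetical simplifications of the scheme described in the text.

def build_defect_map(rows, cols, test_block):
    """Return the set of (row, col) blocks that fail the post-fabrication test."""
    return {(r, c) for r in range(rows) for c in range(cols) if not test_block(r, c)}

def place_functions(rows, cols, defect_map, functions):
    """Assign each required function to the next working block, skipping defects."""
    good = [(r, c) for r in range(rows) for c in range(cols)
            if (r, c) not in defect_map]
    if len(functions) > len(good):
        raise RuntimeError("not enough working blocks for the design")
    return dict(zip(functions, good))

# Assume a 3x3 block array where only block (1, 1) is found defective at test.
defects = build_defect_map(3, 3, lambda r, c: (r, c) != (1, 1))
placement = place_functions(3, 3, defects, ["adder", "mux", "register"])
print(placement)
```

With sufficiently small block granularity, losing one block out of nine costs about 11% of the resources rather than the entire die, which is the yield argument the paragraph above makes.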
Therefore, there existed a need for a waferscale integrated circuit device and method that provides a high level of complexity and functionality while still maintaining high production yields. This is achieved through a programmable functional block architecture and a fault-tolerance scheme that utilizes only those functional blocks of the device that are functional after fabrication, and does not utilize those that are non-functional. These functional blocks must be of sufficiently small granularity to assure a high yield of functional blocks with respect to manufacturing defects.