The present invention is related in general to the field of semiconductor devices and testing and more specifically to a testing methodology assuring device quality and reliability without conventional burn-in while using a low-cost tester apparatus.
W. Shockley, the inventor of the transistor and Nobel prize winner, demonstrated in the late 1950s and early 1960s the effect of fabrication process variations on semiconductor device performance; he specifically explored the dependence of the p-n junction breakdown voltage on local statistical variations of the space charge density; see W. Shockley, "Problems Related to p-n Junctions in Silicon", Solid-State Electronics, vol. 2, pp. 35-67, 1961.
Since that time, numerous researchers have investigated semiconductor integrated circuit (IC) process steps to show that each process step has its design window, which in most cases follows a Gaussian bell-shaped distribution curve with unavoidable statistical tails. These researchers have illuminated how this statistical variation affects the performance characteristics of semiconductor devices, and how to keep the processes within a narrow window. The basis for determining the process windows was in most cases careful modeling of the process steps (such as ion implantation, diffusion, oxidation, metallization, junction behavior, effect of lattice defects and impurities, ionization, etc.); see, for example, reviews in F. van de Wiele et al., "Process and Device Modeling for Integrated Circuit Design", NATO Advanced Study Institutes Series, Noordhoff, Leyden, 1977. Other modeling studies addressed the simulation of circuits directly; see, for example, U.S. Pat. No. 4,744,084, issued May 10, 1988 (Beck et al., "Hardware Modeling System and Method for Simulating Portions of Electrical Circuits").
Today, these relationships are well known to circuit and device designers; they control how process windows have to be designed in order to achieve certain performance characteristics and device specifications. Based on these process parameters, computer simulations are at hand not only for specification limits, but within full process capability, so that IC designs and layouts can be created. These "good" designs can be expected to result in "good" circuits whenever "good" processes are used in fabrication; device quality and reliability are high. Based on testing functional performance, computer-based methods have been proposed for verifying semiconductor device conformance to design requirements. See, for example, U.S. Pat. No. 5,668,745, issued Sep. 16, 1997 (Day, "Method and Apparatus for Testing Semiconductor Devices").
However, when a process is executed during circuit manufacturing so that it deviates significantly from the center of the window, or when it is marginal, the resulting semiconductor device may originally still be within its range of electrical specifications, but may have questionable long-term reliability. How can this be determined? The traditional answer has been the so-called "burn-in" process. This process is intended to subject the semiconductor device to accelerating environmental conditions such that the device parameters would show within a few hundred hours what would happen in actual operation after about 2 years.
In typical dynamic burn-in, circuit states are exercised using stuck-fault vectors. The accelerating conditions include elevated temperature (about 140° C.) and elevated voltage (Vdd about 1.5× nominal); the initial burn-in is for 6 hr, the extended burn-in is 2 sets of 72 hr, with tests after each set. Since 6 hr burn-in is equivalent to 200 k power-on hours, device wearout appears early, the reliability bathtub curve is shortened, and the effect of defects such as particles will be noticed.
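The temperature and voltage acceleration described above is commonly modeled with an Arrhenius thermal term and an exponential voltage term. The following is only an illustrative sketch: the activation energy, voltage factor, and use conditions are assumed textbook-style values, not figures disclosed in this document.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermal_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius thermal acceleration factor (Ea assumed, in eV)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

def voltage_af(v_nominal, v_stress, gamma=3.0):
    """Exponential voltage acceleration factor (gamma assumed, in 1/V)."""
    return math.exp(gamma * (v_stress - v_nominal))

# 140 C / 1.5x Vdd stress versus assumed 55 C / 1.8 V use conditions
af_t = thermal_af(55.0, 140.0)
af_v = voltage_af(1.8, 1.8 * 1.5)
print(f"thermal AF ~ {af_t:.0f}, voltage AF ~ {af_v:.0f}, "
      f"combined ~ {af_t * af_v:.0f}")
```

With these assumed parameters the combined factor reaches the thousands; the exact equivalence of 6 hr to 200 k power-on hours depends on the activation energy and voltage factor of the actual failure mechanisms.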
There are several types of defects in ICs, most of which are introduced during the manufacturing process flow. In the last three decades, these defects have been studied extensively; progress is, for example, reported periodically in the Annual Proceedings of the IEEE International Reliability Physics Symposium and in the reprints of the Tutorials of that Symposium.
In the so-called bathtub curve display, the number of failures is plotted versus time. The initial high number of failures is due to extrinsic failures, such as particulate contamination, and poor process margins. The number of failures drops sharply to the minimum of intrinsic failures and remains at this level for most of the device lifetime. After this period of constant (inherent) failure rate, the number of failures increases sharply due to wearout (irreversible degradation such as metal electromigration, dielectric degradation, etc.).
Based on functional tests and non-random yields, automated methods have been proposed to analyze defects in IC manufacturing and distinguish between random defects and systematic defects. See, for example, U.S. Pat. No. 5,497,381, issued Mar. 5, 1996 (O'Donoghue et al., "Bitstream Defect Analysis Method for Integrated Circuits").
For burn-in, the devices need facilities equipped with test sockets, electrical biasing, elevated temperature provision, and test equipment. Considering the large population of devices to be burned-in, the expense for burn-in is high (floor space, utilities, expensive high-speed testers for final device test, sockets, etc.). As an example of a proposal to avoid burn-in, see J. A. van der Pol et al., "Impact of Screening of Latent Defects at Electrical Test on the Yield-Reliability Relation and Application to Burn-in Elimination", 36th Ann. Proc. IEEE IRPS, pp. 370-377, 1998. It is proposed that voltage stresses, distribution tests and Iddq screens are alternatives to burn-in, but the tests cover only device specification and are thus too limited and expensive.
An additional concern is the effect burn-in has on the devices which are subjected to this procedure. After the process, many survivors are "walking wounded", which means that their probable life span may have been shortened to an unknown degree.
In addition to the greatly increased cost for burn-in, the last decade has seen an enormous cost increase for automatic testing equipment. Modern high-speed testers for ICs cost in excess of $1 million, approaching $2 million. They also consume valuable floor space and require considerable installation (cooling) effort. These testers not only have to perform the traditional DC parametric device tests, but also the ever more demanding functional and AC parametric tests. DC parametric tests measure leakage currents and compare input and output voltages, both of which require only modest financial investment. Functional tests are based on the test pattern of the device-to-be-tested, a tremendous task for the rapidly growing complexity of modern ICs. AC parametric tests measure speed, propagation delay, and signal rise and fall times. These tests are combined into "at speed" functional tests. The required timing control, calibration, and delivery of many patterns at high speed account for the lion's share of the financial investment (between 80 and 95%). Included here are the pattern memory and timing for stimulus and response, formatting by combining timing and pattern memory, serial shift registers (scan), and the pattern sequence controller.
Traditional automatic test equipment (ATE) incorporates expensive, high-performance pattern memory subsystems to deliver complex test patterns during production test of digital ICs. These subsystems are designed to deliver wide patterns (typically 128 to 1024 bits) at high speeds (typically 20 MHz to hundreds of MHz, more than 400 MHz on new devices). The depth of the pattern storage is typically 1 to 64 million patterns. The width, speed and depth of the pattern memory requirements, along with the sequencing capability (loops, branches, etc.), combine to significantly affect the cost of the pattern subsystem, to the extent that most pattern subsystems represent a significant component of the overall ATE cost.
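The cost impact of these requirements can be seen from a rough sizing exercise at the upper end of the quoted ranges (the figures below come directly from those ranges; the calculation itself is only illustrative):

```python
# Rough sizing of a traditional ATE pattern memory at the upper end of
# the quoted ranges: 1024 bits wide, 64 million patterns deep.
width_bits = 1024
depth_patterns = 64 * 10**6

total_bits = width_bits * depth_patterns
total_gbytes = total_bits / 8 / 10**9
print(f"{total_gbytes:.1f} GB of dedicated high-speed pattern memory")
# -> 8.2 GB of dedicated high-speed pattern memory
```

Several gigabytes of memory that must be addressed at hundreds of MHz across a thousand pin channels explains why the pattern subsystem dominates ATE cost.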
The traditional pattern memory subsystem limitations are often the source of test program development problems and initial design debug inefficiencies. The number of test patterns required is proportional to the number of transistors in a device. As device integration rapidly progresses, the corresponding test pattern requirements will present increasingly difficult challenges for cost-effective traditional pattern memory subsystems.
In summary, the goal of avoiding the expensive burn-in procedure and replacing it by a low-cost, fast, reliable and flexible procedure has remained elusive, until now. An urgent need has, therefore, arisen for a coherent approach to both a low-cost method and a low-cost testing equipment offering a fundamental solution not only to avoid burn-in, but to guarantee quality and reliability of semiconductor devices in general, and to achieve these goals with testers of much reduced cost. The method should be flexible enough to be applied for different semiconductor product families and a wide spectrum of design and process variations and should lend itself as a guiding tool during wafer fab processing as well as after testing at multiprobe and after assembly and packaging. The method and the testers should increase manufacturing throughput and save floor space, time and energy. Preferably, these innovations should break the stranglehold of cost increases for fast testers which are today a significant part of the skyrocketing cost of IC device production, and expedite the time-to-market required for new IC products.
The present invention provides a testing method and apparatus assuring IC device quality and reliability by testing yield based on process capability and not yield based on device specifications. With this fundamental change in test methodology, burn-in can be eliminated or reduced to a few percent of the product volume having questionable characteristics, and the cost of testers is reduced to about 5% of today's high-speed tester cost. The levels of reliability are comparable to six-sigma levels (the six-sigma methodology is defined in relation to specifications).
The method for assuring quality and reliability of IC devices, fabricated by a series of documented process steps, comprises first functionally testing the devices outside their specified voltage range, yet within the capabilities of the fabrication process steps, then interpreting these electrical data to provide non-electrical characterization of the devices, thereby verifying their compositional and structural features, and finally correlating these features with the fabrication process steps to find deviations from the process windows, as well as structural defects.
The present invention can be applied to all logic devices, specifically those made by CMOS technology, such as wireless products, hard disk drives, digital signal processors, application-specific devices, mixed-signal products, microprocessors, and general purpose logic devices. It can also be adapted to memory products, and expanded to parallel embedded testing, analog testing, and optimized JTAG/scan.
Based on the well-proven premise that good designs result in good products when good manufacturing processes are used, the present invention avoids the traditional method of at-speed functional testing and testing of propagation delays, and instead tests for so-called process "outliers", both of the systematic and the non-systematic kind. The outlier methodology verifies critical process parameters on each chip, such as voltage box for Vdd, tight Iddq, and leakage current, as opposed to conventional methodology, which verifies electrical specification on each chip for each specified parameter.
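A "tight Iddq" outlier screen of the kind mentioned above can be sketched as a population statistic: dies whose quiescent supply current deviates strongly from the rest of the lot are flagged as latent-defect candidates. The robust median/MAD statistic and threshold below are our own illustrative choices; the document does not prescribe a particular statistic.

```python
import statistics

def iddq_outliers(currents_ua, k=4.0):
    """Flag dies whose quiescent supply current (uA) deviates from the
    population, using a median/MAD screen (illustrative statistic)."""
    med = statistics.median(currents_ua)
    # median absolute deviation; guard against an all-identical lot
    mad = statistics.median(abs(c - med) for c in currents_ua) or 1e-9
    return [i for i, c in enumerate(currents_ua)
            if abs(c - med) / mad > k]

# Nine well-behaved dies and one latent-defect candidate at 55 uA,
# which passes an absolute spec limit of, say, 100 uA but is an outlier
readings = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1, 1.0, 55.0]
print(iddq_outliers(readings))  # -> [9]
```

The point of the sketch is the contrast drawn in the text: the 55 uA die might satisfy a loose specification limit, yet it is far outside the capability of the process that produced its neighbors.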
The outlier methodology emphasizes logic testing, including at-speed built-in self-test (BIST), delay fault, I-drive, and wide voltage box testing. The tester of the present invention is capable of DC testing, including continuity, leakage current, and Iddq, and logic testing, including slow functional tests, serial scan, algorithmic patterns (for memory devices), delay path fault, and at-speed BIST. However, the tester of the present invention does not need traditional at-speed functional device testing.
Newer test methodologies, design-for-test (DFT) and BIST techniques are reducing the need to deliver test patterns at high speed to several classes of advanced logic devices. Relaxing traditional at-speed test requirements represents an opportunity to significantly reduce the cost of ATE. However, even with reduced pattern speed requirements, the depth, width and complexity of the required pattern sequences can still have a significant impact on the architecture and cost of the ATE. The invention takes advantage of the potentially lower pattern speed requirements of devices compatible with newer test methodologies, DFT or BIST techniques, by eliminating the need for a traditional pattern memory subsystem, thereby avoiding a significant component of ATE cost.
In one embodiment of the present invention, the (relatively low cost) workstation or general purpose computer controlling the tester is used as a "virtual" pattern memory system. In this function, the computer stores and delivers digital test patterns and thus replaces the (expensive) pattern sequence controller (pattern memory sub-system) in traditional testers. The workstation is needed for other tester control functions anyway, so it represents no added cost.
In an embodiment of the invention, the tester controller is a high-performance workstation providing user interface, factory connectivity, as well as tester control. The workstation uses "virtual" memory for program and data storage. Test patterns are stored in the workstation memory as direct memory access (DMA) blocks, and transferred to the device under test (DUT) for digital stimulus and response comparison, as needed, during production testing. Although the pattern data is not transferred "at speed" to the DUT, the use of DMA techniques ensures that the patterns are transferred as efficiently as possible, in order to minimize test time.
In a specific embodiment of the invention, the pattern bits stored in the tester controller are executed as a stream of DMA blocks in order to generate the traditional parallel pattern simultaneously applied to the device-under-test.
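As a sketch of this embodiment, the following simulates delivering a pattern sequence as fixed-size blocks from host memory to the device-under-test; the block size and helper names are illustrative stand-ins for the DMA machinery, not part of the disclosure.

```python
# Sketch of streaming test patterns as fixed-size blocks from host
# (workstation) memory, standing in for the DMA transfers described
# in the text; block size and names are illustrative.
def stream_pattern_blocks(pattern_words, block_size=4):
    """Yield the pattern sequence in DMA-sized blocks."""
    for i in range(0, len(pattern_words), block_size):
        yield pattern_words[i:i + block_size]

def apply_to_dut(pattern_words, block_size=4):
    """Reassemble the streamed blocks as the DUT would receive them."""
    applied = []
    for block in stream_pattern_blocks(pattern_words, block_size):
        applied.extend(block)  # stand-in for driving the pin electronics
    return applied

patterns = [0b1010, 0b0110, 0b1111, 0b0001, 0b1000, 0b0011]
assert apply_to_dut(patterns) == patterns  # streaming preserves order
```

The essential property the sketch shows is that block-wise delivery reproduces the full parallel pattern sequence at the device, without a dedicated deep pattern memory holding it all at once.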
Since the workstation's memory is "virtual", the test pattern depth is no longer constrained by traditional pattern memory subsystem costs and limitations, avoiding difficult cost/limitation tradeoff decisions.
In another embodiment of the invention, only the changes between patterns have to be stored and loaded. This feature makes pattern storage much more efficient. In contrast, traditional pattern memory systems store all digital information for all pins for each pattern, even though only a small percentage of the total information changes from pattern to pattern. This increases storage requirements and increases the amount of time needed to compile and load pattern information into the tester.
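The change-only storage described above can be sketched as a simple delta encoding: the first pattern is stored in full, and each subsequent pattern stores only the (pin, value) pairs that differ from its predecessor. The encoding below is our own illustration of the idea, not the disclosed format.

```python
# Sketch of change-only ("delta") pattern storage: instead of every
# pin value for every pattern, store only the pins that change from
# one pattern to the next.
def encode_deltas(patterns):
    """patterns: list of equal-length tuples of per-pin values."""
    deltas = [list(enumerate(patterns[0]))]  # first pattern in full
    for prev, cur in zip(patterns, patterns[1:]):
        deltas.append([(pin, v) for pin, (p, v) in
                       enumerate(zip(prev, cur)) if p != v])
    return deltas

def decode_deltas(deltas, n_pins):
    """Replay the deltas to recover the full pattern sequence."""
    state = [0] * n_pins
    out = []
    for changes in deltas:
        for pin, v in changes:
            state[pin] = v
        out.append(tuple(state))
    return out

pats = [(0, 0, 1, 0), (0, 1, 1, 0), (0, 1, 1, 1), (0, 1, 1, 1)]
assert decode_deltas(encode_deltas(pats), 4) == pats  # lossless
```

Because only a small percentage of pins typically change between consecutive patterns, the delta representation is far smaller than storing every pin value for every pattern, which shortens both compile and load times.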
In yet another embodiment of the invention, the workstation capability to create execution flexibility is exploited to avoid the cost/limitation tradeoff of traditional pattern memory systems due to dedicated hardware. Conventionally, loops, repeats, branches, subroutines, etc. are limited by the architecture and cost of the pattern memory subsystem hardware. In contrast, according to the invention, the pattern sequences are controlled by the workstation software. The only limit to pattern execution flexibility is software execution overhead. This overhead, however, is generally small with respect to the total pattern execution time in this application.
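The software-controlled sequencing described above can be sketched as a tiny interpreter: loops, subroutine calls, and pattern emission are ordinary program constructs rather than dedicated hardware features. The instruction set (PAT / REPEAT / CALL) is our own illustration.

```python
# Minimal software pattern sequencer standing in for the dedicated
# loop/branch hardware of a traditional pattern memory subsystem.
def run_sequence(program, subroutines, out=None):
    """Interpret a list of (opcode, argument) sequencing instructions."""
    out = [] if out is None else out
    for op, arg in program:
        if op == "PAT":            # emit one pattern
            out.append(arg)
        elif op == "REPEAT":       # arg = (count, body), a software loop
            count, body = arg
            for _ in range(count):
                run_sequence(body, subroutines, out)
        elif op == "CALL":         # named subroutine for reused sequences
            run_sequence(subroutines[arg], subroutines, out)
    return out

subs = {"init": [("PAT", "reset"), ("PAT", "idle")]}
prog = [("CALL", "init"),
        ("REPEAT", (3, [("PAT", "scan")])),
        ("PAT", "capture")]
print(run_sequence(prog, subs))
# -> ['reset', 'idle', 'scan', 'scan', 'scan', 'capture']
```

Loop depth, branch count, and subroutine nesting here are bounded only by software, which is the execution flexibility the embodiment claims over fixed pattern memory hardware.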
The technical advances represented by the invention, as well as the objects thereof, will become apparent from the following description of the preferred embodiments of the invention, when considered in conjunction with the accompanying drawings and the novel features set forth in the appended claims.