In recent years, ASIC (Application Specific Integrated Circuit) technology has evolved from a chip-set philosophy to an embedded-core-based system-on-a-chip (SoC). An SoC is an IC designed by stitching together multiple stand-alone VLSI designs (cores) to provide full functionality for an application. Namely, SoCs are built using pre-designed models of complex functions known as “cores” (also known as Intellectual Property, or IP) that serve a variety of applications. These cores are generally available either in a hardware description language (HDL) such as Verilog/VHDL, or in a transistor-level layout format such as GDSII. An SoC may contain a combination of cores with different functions, such as microprocessors, large memory arrays, audio and video controllers, modems, internet tuners, 2D and 3D graphics controllers, and DSP functions.
After the design stage, conducted in an EDA (electronic design automation) environment, the SoC design is implemented in the form of a silicon chip. This invention is directed to a methodology for evaluating the SoC design in silicon (“silicon debug”), both for each core and for the SoC chip as a whole. While such system chips serve broad applications, these chips are far too complex to be tested by conventional means (“Testing embedded cores”, A D&T Roundtable, IEEE Design and Test, pp. 81-89, April-June 1997; R. Rajsuman, “Challenge of the 90's: Testing CoreWare based ASICs”, Panel on “DFT for embedded cores”, International Test Conference, p. 940, 1996).
In addition to the difficulties in production testing, these SoCs also present a major difficulty in determining their functional correctness when prototype silicon is manufactured. The primary cause of this difficulty is the limited observability and controllability of individual cores. In general, only the chip I/Os (the inputs and outputs of the SoC chip) are accessible to apply test vectors or to observe the responses to those vectors, while the I/Os of each embedded core are not accessible. Thus, in a complex SoC, many internal faults do not show up at the chip I/Os.
FIG. 1 schematically illustrates an example of the general structure of an SoC. In this example, an SoC 10 has an embedded memory 12, a microprocessor core 14, three function-specific cores 16, 18 and 20, a PLL (phase-locked loop) 22 and a TAP (test access port) 24. The overall testing of the SoC can be done only through the chip-level I/Os. In this example, such chip-level I/Os are established by chip I/O pads 28 formed on an I/O pad frame 26 at the outer periphery of the SoC 10. Each of the functional cores 12, 14, 16, 18 and 20 includes a pad frame 29, which typically contains multiple layers of I/O pads at the core periphery. Generally, in IC design, the top metal layer is used for power sources (power pads 32), while intermediate metal layers are used for I/O or signal pads for interfacing with the other cores, the microprocessor core and the embedded memory.
Where a failure exists, it is important to know its cause: whether it is due to the microprocessor core 14, the function-specific cores 16, 18 or 20, or some other cause such as an interface between the cores. Debugging the failure is necessary because the failure must be corrected before the SoC design is sent to mass production.
One of the conventional technologies for fault diagnosis is based on a fault dictionary (R. Rajsuman, M. Saad and B. Gupta, “On the fault location in combinational logic circuits”, IEEE Asilomar Conference, pp. 1245-1250, 1991; A. K. Somani, V. K. Agarwal and D. Avis, “A generalized theory for system level diagnosis”, IEEE Trans. Computers, pp. 538-546, May 1987). An automatic test pattern generation (ATPG) tool generates many vectors for each stuck-at fault and collapses these vectors so that each fault is covered just once. Examples of such tools include commercial tools such as Synopsys TetraMAX and tools developed in academic environments such as Socrates.
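The vector-collapsing step performed by ATPG tools may be illustrated by the following sketch (the vector names, fault names and the simple greedy strategy are hypothetical; production ATPG tools use far more sophisticated algorithms):

```python
# Illustrative sketch of test-set compaction: keep only enough vectors
# that every stuck-at fault is covered at least once.
# The coverage data below is invented for illustration.
detects = {
    "v1": {"f1", "f2"},
    "v2": {"f2", "f3"},
    "v3": {"f3"},
}

def compact(detects):
    """Greedily keep vectors that detect at least one not-yet-covered
    fault; drop vectors whose faults are already covered."""
    covered, kept = set(), []
    # Prefer vectors that detect the most faults.
    for v in sorted(detects, key=lambda v: -len(detects[v])):
        new_faults = detects[v] - covered
        if new_faults:
            kept.append(v)
            covered |= new_faults
    return kept

print(compact(detects))  # ['v1', 'v2'] -- v3 is dropped; f3 is already covered
```

Note that v3 is discarded even though it detects f3: the compacted set still covers every fault once, but the information about which other vectors also exercise f3 is lost, which is precisely the diagnostic information discussed below.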
The test vector reduction in ATPG tools provides a compact test set; however, a large amount of information that is vital for fault diagnosis is lost during test vector compaction. To overcome this loss, a “fault dictionary” is used, which is basically a database that lists all vectors, their corresponding faults and, sometimes, the corresponding fault propagation cone that is active during either fault sensitization or fault-effect propagation. Traditionally, from the fault dictionary, one can identify an area (the active cone) that contains the fault.
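The dictionary-based diagnosis described above can be sketched as follows (the net names and dictionary entries are invented; a real dictionary is generated by the ATPG tool): each failing vector implicates a set of candidate faults, and intersecting these sets narrows down the fault location.

```python
# Minimal sketch of fault-dictionary-based diagnosis (illustrative only).
# Map each test vector to the set of faults it detects; the entries
# below are hypothetical placeholders for ATPG-generated data.
fault_dictionary = {
    "v1": {"net3/sa0", "net7/sa1"},
    "v2": {"net7/sa1", "net9/sa0"},
    "v3": {"net2/sa1"},
}

def diagnose(failing_vectors):
    """Intersect the candidate-fault sets of all observed failing
    vectors to narrow down the faulty region."""
    candidates = None
    for v in failing_vectors:
        faults = fault_dictionary[v]
        candidates = set(faults) if candidates is None else candidates & faults
    return candidates or set()

# If vectors v1 and v2 both fail, only net7/sa1 explains both failures.
print(diagnose(["v1", "v2"]))  # {'net7/sa1'}
```

In practice, as the following paragraph notes, confirming the diagnosis requires applying additional vectors from the dictionary at the core I/Os, which is where the access problem arises.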
One very serious limitation of this method is that it requires direct access to the internal I/Os of a core so that additional test vectors from the fault dictionary can be applied to identify the faulty region. Some attempts have been made to use an electron beam tester (N. Kuji, T. Tamara and M. Nagatani, “FINDER: A CAD system based electron beam tester for fault diagnosis of VLSI circuits”, IEEE Trans. CAD, pp. 313-319, April 1986) or full-scan circuits (K. De and A. Gunda, “Failure analysis for full-scan circuits”, IEEE Int. Test Conference, pp. 636-645, 1995).
At the present time, the IEEE P1500 working group is developing a solution to make core I/Os accessible. This solution is based upon the use of extra logic that includes a shift-register-based wrapper at the core I/Os and a data transport bus from the chip I/Os to the core I/Os (IEEE P1500 web-site, http://grouper.ieee.org/groups/1500/; “Preliminary outline of the IEEE P1500 scalable architecture for testing embedded cores”, IEEE VLSI Test Symposium, 1999). This structure is illustrated in FIGS. 2A-2C, where FIG. 2A shows an overall wrapper at the outer boundary of a core, and FIGS. 2B and 2C respectively show the structures of an input cell 42 and an output cell 44 in the wrapper of FIG. 2A.
Similar solutions based upon a core wrapper and data transport logic have also been proposed by the Virtual Socket Interface Alliance (VSIA) and other researchers (“Manufacturing related test development specification 1”, version 1.0, VSI Alliance, 1998; “Test access architecture”, VSI Alliance, 2000; R. Rajsuman, “System-on-a-Chip: Design and Test”, Artech House Publishers Inc., ISBN 1-58053-107-5, 2000; D. Bhattacharya, “Hierarchical test access architecture for embedded cores in an integrated circuit”, IEEE VLSI Test Symposium, pp. 8-14, 1998).
The major drawbacks of these methods are that they require extra logic, which increases chip size and hence cost, and that they incur a performance penalty because of the wrapper at the core I/Os. An example of such a performance penalty is the additional signal propagation delay in the SoC caused by the extra circuit components and paths. Also, in all cases, a test vector is shifted into the wrapper register and the response is shifted out over multiple clock cycles. Until the response of the previous vector has been completely shifted out, a new test vector cannot be applied. Hence, these solutions cannot help in diagnosing timing-related failures because at-speed testing cannot be done. Further, in all these solutions, the testing time becomes too long, which means excessive cost.
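The serial shift-in/shift-out cost described above can be sketched as follows (the wrapper model is a deliberate simplification of the P1500-style boundary register; the register length and vector are hypothetical):

```python
# Sketch of serial access to a wrapper boundary register: one bit
# enters per clock cycle, so an N-bit vector costs N cycles to load.
def shift_in(wrapper, bits):
    """Shift a test vector serially into the wrapper register,
    one bit per clock cycle; returns the cycle count."""
    cycles = 0
    for b in bits:
        wrapper.insert(0, b)  # new bit enters at the scan input
        wrapper.pop()         # oldest bit falls off the scan output
        cycles += 1
    return cycles

wrapper = [0] * 8             # a hypothetical 8-bit wrapper register
vector = [1, 0, 1, 1, 0, 0, 1, 0]
cycles = shift_in(wrapper, vector)

# Loading one 8-bit vector costs 8 clock cycles, and the response must
# be shifted out for another 8 cycles before the next vector can be
# applied; consecutive vectors therefore cannot be applied at speed.
print(cycles)   # 8
print(wrapper)  # register holds the vector in reverse arrival order
```

For a core with hundreds of I/Os, each vector costs hundreds of shift cycles, which is the source of both the long test time and the inability to perform at-speed diagnosis noted above.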
Another conventional approach is a “bed of nails” type method described in U.S. Pat. Nos. 4,749,947 and 4,937,826. In this method, a grid of wires is created on which the functional circuit to be tested is placed. Every node in the functional circuit can be accessed by a vertical transistor that provides a connection from the node to the grid wires. In principle, this method provides 100% observability. However, it is extremely expensive, as it requires multiple additional steps (layout masks) and modification of the existing SoC manufacturing process. Also, the presence of the grid of wires significantly increases circuit parasitic capacitance, resulting in a performance penalty.
As described in the foregoing, the conventional technologies are not satisfactory for fully debugging the individual cores and interconnects in an SoC, or for identifying faulty locations in the SoC, without drawbacks such as increased size and cost or performance penalties.