As is known, programmable devices are a class of general-purpose integrated circuits that can be configured for a wide variety of applications. Such programmable devices have two basic versions: mask programmable devices, which are programmed only by the manufacturer, and field programmable devices, which are programmable by the end user. In addition, programmable devices can be further categorized as programmable memory devices or programmable logic devices. Programmable memory devices include programmable read only memory (PROM), erasable programmable read only memory (EPROM) and electronically erasable programmable read only memory (EEPROM). Programmable logic devices include programmable logic array (PLA) devices, programmable array logic (PAL) devices, erasable programmable logic device (EPLD) devices, and programmable gate arrays (PGA).
Field programmable gate arrays (FPGA) have become very popular for telecommunication applications, Internet applications, switching applications, routing applications, and a variety of other end user applications. In general, an FPGA includes programmable logic fabric (containing programmable logic gates and programmable interconnects) and programmable input/output blocks. The programmable input/output blocks are fabricated on a substrate supporting the FPGA and are coupled to the pins of the integrated circuit, allowing users to access the programmable logic fabric. The programmable logic fabric may be programmed to perform a wide variety of functions corresponding to particular end user applications. The programmable logic fabric may be implemented in a variety of ways. For example, the programmable logic fabric may be implemented in a symmetric array configuration, a row-based configuration, a column-based configuration, a sea-of-gates configuration, or a hierarchical programmable logic device configuration.
As is further known, field programmable gate arrays allow end users the flexibility of implementing custom integrated circuits while avoiding the initial cost, time delay and inherent risk of application specific integrated circuits (ASIC). While FPGAs have these advantages, there are some disadvantages. For instance, an FPGA programmed to perform a similar function as implemented in an ASIC can require more die area than the ASIC. Further, performing at-speed testing of an FPGA can be difficult.
In particular, as process technologies shrink to 130 and 90 nanometers, speed-related defects become more and more of an issue. Resistive vias or resistive bridge defects between two neighboring metal lines can cause a transition fault. A transition fault refers to a gate or a path that fails to meet timing requirements due to a manufacturing defect. Unlike a stuck-at fault, which can be detected by appropriate application of vectors and observation of outputs, transition faults have the added requirement of at-speed testing.
The transition fault model is a modified version of the stuck-at fault model with an additional restriction of speed. Transition fault testing aims to catch faults related to slow-to-rise and slow-to-fall transitions. One possible cause of slow-to-rise and slow-to-fall transitions is a bridging fault that slows the transition time of a gate but eventually produces the correct value. These types of faults are not detected by conventional low-speed tests.
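The escape described above can be sketched in a few lines. The following is an illustrative model only (the delay values and function names are assumptions, not from the source): a gate with a slow-to-rise defect still settles to the correct value eventually, so a slow test passes it, while sampling one rated-speed clock period after the launch catches it.

```python
# Hypothetical illustration: a slow-to-rise fault produces the correct
# value eventually, so only an at-speed capture detects it.

NOMINAL_DELAY_NS = 2.0   # healthy gate propagation delay (assumed)
FAULTY_DELAY_NS = 8.0    # delay inflated by a resistive bridge (assumed)

def sample_output(gate_delay_ns, clock_period_ns):
    """Value sampled one clock period after a 0->1 transition is launched.

    If the gate has settled before the capture edge, the correct new
    value (1) is observed; otherwise the stale old value (0) is seen.
    """
    return 1 if gate_delay_ns <= clock_period_ns else 0

# Conventional low-speed test (100 ns period): the fault escapes.
assert sample_output(FAULTY_DELAY_NS, 100.0) == 1

# At-speed test (5 ns period): the good part passes, the faulty one fails.
assert sample_output(NOMINAL_DELAY_NS, 5.0) == 1
assert sample_output(FAULTY_DELAY_NS, 5.0) == 0
```

The sketch makes the key point concrete: detection depends entirely on the spacing between launch and capture, not on the vector values themselves.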
In addition, many such transition faults are candidates for future reliability failures, because a part that starts out with marginal or timing-related defects may develop hard failures in the field. Thus, testing for speed-related defects becomes increasingly important.
One obvious way to catch speed-related defects is to run the conventional tests at high speed. High-speed tests of integrated circuits, however, have a variety of practical problems associated with them. The test hardware has to reliably apply and sample the vectors at very high speeds, which tends to increase the cost of the tester. Also, applying vectors at high speed results in high current consumption, and the device under test heats up. It is not very practical to employ sophisticated heat sinks in the test environment.
The problems with high-speed tests are well recognized by the industry. In the application specific integrated circuit (ASIC) world, there are several approaches to at-speed testing. In one approach, at-speed testing utilizes built-in circuitry that internally applies the vectors and compacts the output before presenting it to an outside tester. In this way, the tester-device interface can be slow, whereas the device under test is tested at high speed. This solution is generally referred to as built-in self-test (BIST), but it tends to be difficult to implement and requires additional silicon area. Further, BIST testing is not deterministic, since the BIST technique utilizes pseudo-random test vectors that do not target specific faults.
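The non-deterministic character of BIST follows from its pattern source. A typical source is a linear feedback shift register (LFSR); the following minimal sketch, with an illustrative 4-bit width and tap positions chosen for a maximal-length sequence, shows why the vectors cover the state space but cannot be steered at a specific fault.

```python
# Minimal sketch of a BIST-style pseudo-random pattern source: a 4-bit
# Fibonacci LFSR. The width, seed, and taps are illustrative assumptions.

def lfsr_patterns(seed=0b1001, width=4, count=15):
    """Yield `count` pseudo-random test vectors from an LFSR."""
    state = seed
    for _ in range(count):
        yield state
        # Feedback from bits 3 and 2 (polynomial x^4 + x^3 + 1).
        bit = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | bit) & ((1 << width) - 1)

patterns = list(lfsr_patterns())
# A maximal-length 4-bit LFSR visits all 15 nonzero states exactly once
# per cycle, but in a fixed order determined by the feedback polynomial,
# so no individual vector can be chosen to target a specific fault.
assert len(set(patterns)) == 15
assert 0 not in patterns
```

An on-chip signature register (e.g., a MISR) would then compact the responses, so the external tester only compares one slow-speed signature.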
Another approach is to use automated test pattern generation (ATPG) and the scan test infrastructure to deliver a pair of closely spaced pulses to test for transition faults. The circuit under test goes through an at-speed transition by use of launch and capture cells. The closely spaced pulses create the at-speed test environment, while average power consumption is kept low because the closely spaced pulses are sparse relative to the slow scan-shift cycles. This technique has not been employed for FPGAs.
Further, in current testing of FPGAs, there are capture cells associated with every flip-flop in the FPGA, and their values can be read back during the test. Read back, however, is a slow process that takes a number of clock cycles. When read back occurs, the next tester clock occurs much later than what is required for at-speed testing. Typically, a tester imposes constraints on the speed of the clocks applied to the circuit and, within a tester, it is difficult to create a narrow-width pulse clock.
Therefore, a need exists for a method and/or apparatus of at-speed testing of programmable logic devices, including FPGAs.