Integrated circuit (IC) chips often embody complex, large scale circuitry such as, for example, random access memories (RAMs), various programmable state machines ranging from simple controllers to complex instruction set central processing units (CPUs), digital input/output buffers, electrically erasable programmable memories (EEPROMs), analog to digital converters (ADCs), digital to analog converters (DACs), and various analog circuitry such as, for illustrative example, pre-amplifiers, equalizers, frequency tunable bandpass filters, temperature sensors, and power converters.
In addition to complexity, the digital circuitry of IC chips operates at ever-increasing clock rates and, similarly, the analog circuitry at higher frequencies and wider bandwidths. Such performance parameters generally necessitate smaller feature (e.g., transistor) sizes and, similarly, higher precision and control for each of a larger sequence of chip fabrication steps.
As known in the IC chip fabrication arts, despite continuing progress in fabrication technology and equipment quality, faults sometimes occur in the fabrication process. For purposes of this description, the term “fault” means any fabrication error that results in the finished IC chip failing, at any point within its given range of operating environments, to meet all of its given function and performance specifications, in response to any possible combination and/or sequence of signal inputs and/or program instructions that the IC chip may encounter while performing in its specified or intended system environment.
Various methods for detecting such faults are known, typically applying values and sequences of input signals, instruction sets, and other conditions, methodically formulated or calculated to exercise at least a sufficient percentage, or sufficient subsets, of the devices (e.g., logic gates, flip-flops) required to detect faults to at least a specified probability.
Often faults may manifest only during certain operating conditions, or when the IC chip is operating in particular modes, or only in response to certain sequences of chip operations and/or signal inputs. The testing therefore must employ particularly calculated values and sequences of, for example, chip control and input signals, both to check how the IC chip or system responds to particular sequences previously identified as detecting faults, and to identify faults not detected in previous testing of the same type of IC chip.
One method for such testing for such faults is termed “boundary scan testing.” The theory of boundary scan testing is known to persons of ordinary skill in the IC chip arts, and to persons of ordinary skill in the multi-chip, packaged system arts. Further detailed description of the theory of boundary scan testing is therefore omitted.
However, also known to persons of ordinary skill in the IC chip and multi-chip system arts is the overhead, in terms of input/output (I/O) pins, necessary for boundary scan testing to meet even the existing testability requirements. A prime example is the four-to-five pins required by the IEEE 1149.1 standard (formally entitled “Standard Test Access Port and Boundary-Scan Architecture”), much more commonly referred to as “JTAG,” the acronym of the industry group, the “Joint Test Action Group,” that developed it. As known to persons of ordinary skill in the art, the I/O pin requirement of JTAG is often termed “four-to-five” pins because one of the signals, the “Test Reset,” or “TRST,” is optional.
The JTAG standard was released in 1990 and has been adopted industry-wide from approximately 2001 to the present. The problem of the I/O pin overhead imposed by the four-to-five pin JTAG interface, and the need for a practical, economical solution to that overhead, has been known since the adoption of the standard.