The present invention relates to digital counters, and more particularly to a digital counter which can be easily and thoroughly tested with a short sequence of random (or pseudo-random) input vectors.
A growing concern associated with the production and use of VLSI chips is the problem of testing for defects in the chip. Fault testing is required at several stages of a chip's life. The first time that testing of the chip is required is when the chip is still part of a wafer. Since the yield for VLSI chips is relatively low, the chip manufacturer must run a manufacturing test on all the chips on the wafer to weed out the bad ones. This involves testing each chip with a set of vectors supplied by the designer in order to ensure that a good chip is being delivered. In the past, a manufacturing test was accomplished by externally applying specially designed test vectors to the chip, either directly at the chip inputs to excite the internal logic, or by serially loading them into set scan chains. The result of the excitation was observed either directly at the chip outputs or by shifting out the set scan chains. These results were then compared to expected results by test support equipment. Designing a set of manufacturing test vectors that would adequately test the chip required an engineer to have in-depth knowledge of the internal logic of the chip. As VLSI chips become denser, this task becomes extremely time-consuming, and the increased number of vectors necessary to test the chip means that manufacturing tests take longer to run and require more expensive support equipment.
Once a chip has passed manufacturing test, it is delivered for integration into a system. During system integration, the proper operation of the chip will need to be verified at least once, and probably more than once, since the check-out and integration process sometimes exposes faults that the manufacturing test missed, and sometimes even introduces new faults. System integration testing of the chip is accomplished both by running functional diagnostic software and by utilizing special stand-alone test equipment to perform manufacturing-type tests.
Once the system is in use, proper operation of the chip will need to be verified periodically by built-in self test (BIST) software in order to ensure that the chip has not been damaged and that a latent defect has not developed. System-level BIST is accomplished by running functional diagnostic software, but the BIST diagnostics are usually less thorough than integration test diagnostics, since integration tests are done in the lab environment and can be written to utilize lab equipment, and because BIST is limited by real-time constraints. Again, as VLSI chips become denser, the task of writing diagnostic software for both BIST and integration testing becomes more complicated and time-consuming, and the tests themselves take longer to run and require more memory. The special test equipment required for stand-alone integration testing must also become more sophisticated (and expensive) as the chips being tested become denser and faster.
An alternate method of testing VLSI chips for faults is the implementation of a pseudo-random vector generation (PRG) test. As the name implies, test vectors for a PRG test are generated pseudo-randomly, thereby eliminating the very expensive procedure of human test vector design. The results of the PRG vectors can be compared against expected results to verify that the chip is fault free. Autonomous self test (AST) is a scheme in which a chip can be commanded to perform a PRG test on itself. With an AST scheme, pseudo-random patterns are generated to excite the core logic of the chip while the results of the excitation are compiled and compared to an expected result, all internally. In order to accomplish this, registers in the chip are serially connected into set scan chains which can be loaded with pseudo-random data from an AST controller, which resides on the chip. During AST, logic clouds (combinational logic that exists between the functional Q output of one or more registers and the functional D input of a register) receive pseudo-random test excitation from the scan chains, while the results of the excitation are observed by scan chain registers at the combinational cloud outputs. The results are compiled and compared against an expected result by the AST controller.
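The text does not name a specific pattern generator, but on-chip pseudo-random sources of this kind are conventionally built from a linear-feedback shift register (LFSR). As an illustration only, the following software sketch models a 16-bit maximal-length Fibonacci LFSR (the tap positions and seed are standard textbook choices, not taken from this document):

```python
def lfsr16(seed=0xACE1):
    """16-bit maximal-length Fibonacci LFSR (taps at bits 16, 14, 13, 11,
    a well-known primitive polynomial).  Yields successive register
    states; starting from any nonzero seed, the sequence cycles through
    all 2**16 - 1 nonzero states before repeating."""
    state = seed & 0xFFFF
    while True:
        yield state
        # Feedback bit is the XOR of the tapped positions.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)

# Successive states would be shifted into the set scan chains as
# pseudo-random excitation for the logic clouds.
gen = lfsr16()
first_patterns = [next(gen) for _ in range(4)]
```

Because the sequence is deterministic for a given seed, the AST controller can compare the compiled response of the core logic against a precomputed expected result.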
Clearly, there are many advantages to implementing an AST scheme. Manufacturing test becomes less expensive because the only patterns that need be written by a human are those required to invoke AST. Also, since the number of vectors that must be externally applied to the chip is greatly reduced, the need for more expensive test support equipment is eliminated. For system integration, the need for stand-alone testers and supporting software is completely eliminated. The complexity, design time, execution time and memory requirements for diagnostic software used for system integration and for BIST are reduced significantly, since the software needs only to invoke the chip's AST.
Designing logic that is testable by a PRG test imposes new constraints on the designer. The challenge is to drive each primitive element in a logic cloud to all input states required to achieve 100% "stuck-at" testing, while propagating the responses to the cloud output, where they are observed by scan chain registers. Stuck-at testing finds failures which are manifested by a circuit node being permanently fixed (i.e., "stuck at") either the high or the low state. In order to accomplish this, logic must be designed so that application of a small number of pseudo-random inputs causes a high probability of excitation and observation of all primitive element inputs and outputs.
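The stuck-at model described above can be illustrated with a small simulation. The example cloud below, out = (a AND b) OR c, is hypothetical and chosen only to show the mechanics: a fault is detected when some input vector makes the faulty circuit's output differ from the fault-free output at a point observed by a scan register.

```python
import random

def cloud(a, b, c, stuck=None):
    """Example combinational cloud: out = (a AND b) OR c.
    If stuck is 0 or 1, the internal AND node is forced to that
    value, modeling a stuck-at fault on that node."""
    node = (a & b) if stuck is None else stuck
    return node | c

def fault_detected(vectors, stuck_val):
    """A stuck-at fault is detected when at least one applied vector
    produces an output that differs from the fault-free output."""
    return any(cloud(a, b, c) != cloud(a, b, c, stuck=stuck_val)
               for (a, b, c) in vectors)

# Apply a short burst of pseudo-random vectors, as a PRG test would.
random.seed(0)
vectors = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(16)]
sa0_found = fault_detected(vectors, 0)
```

Note that detecting stuck-at-0 on the AND node requires the specific pattern a=1, b=1, c=0; logic must be designed so that random inputs reach such sensitizing patterns with high probability.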
Counters often cause problems for any type of fault-finding test, and PRG tests are no exception. In order for PRG tests to be effective, each signal node must have a reasonable probability of being observed at both possible logic levels, high and low. The problem with most counter designs is that in order for a particular bit of the counter to be enabled to count, all of the less significant bits must be logic 1's for an up counter, or logic 0's for a down counter, and the counter enable control must be in the "enabled" state. This means that the probability of detecting faults continues to get lower for each bit that a counter has, i.e., the probability of detecting a "stuck at" (SA) fault at the nth bit of a counter is 1/2^n. The Full Scale (FS) signal is also a problem for PRG test, since FS is simply a logical "AND" function with as many inputs as there are bits in the counter. The probability of the FS signal being active for an n-bit standard counter is 1/2^n. Even for standard counters with a relatively low number of bits, these probabilities of fault detection can make a PRG test of the counter ineffective.
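The 1/2^n figures above follow directly from counting the register states, and can be checked by exhaustive enumeration. The sketch below (function names are illustrative, not from the document) computes the fraction of uniformly random states in which bit n of an up counter is enabled, and in which the FS output of an n-bit counter is active:

```python
from itertools import product

def bit_enable_probability(n):
    """Fraction of uniformly random states in which bit n of a
    conventional up counter is enabled to toggle: all n
    less-significant bits must be 1, which occurs in exactly one of
    the 2**n equally likely lower-bit patterns.  (The counter-enable
    control, which lowers the odds further, is ignored here.)"""
    patterns = list(product((0, 1), repeat=n))
    return sum(all(p) for p in patterns) / len(patterns)

def full_scale_probability(n_bits):
    """Probability that the FS signal (an AND of all n_bits counter
    outputs) is active for a uniformly random counter state."""
    patterns = list(product((0, 1), repeat=n_bits))
    return sum(all(p) for p in patterns) / len(patterns)
```

For example, the 4th bit of a counter is enabled in only 1 of 16 random states, and the FS output of an 8-bit counter is active in only 1 of 256, which is why a short pseudo-random sequence is unlikely to exercise the upper bits of a conventional counter.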
There exists a multitude of different varieties of counters available in MSI packages and as macros for custom gate arrays. However, none of these provide the controllability or observability necessary to make a PRG test feasible. Further, no technique for implementing a fault test of any kind is suggested or implied by the producers of these counters.
There is therefore a need for a counter in a VLSI circuit which is PRG testable.