As integrated circuit feature sizes continue to shrink, more functional blocks are integrated into a single chip. Meanwhile, complex fault models are often required to detect the defects emerging from shrinking technologies and new materials. This in turn causes a dramatic increase in test data volume and test application time. On-chip test compression has therefore become a standard DFT methodology in industry today.
Over the past decades, a large number of test compression schemes have been proposed. They can be classified into three categories: stand-alone BIST, hybrid BIST, and test data compression. The original test data compression approach, known as LFSR coding, exploits the fact that the number of specified bits in the test cubes is typically no more than 1% of the total number of scan cells in the design; compression is achieved by encoding the specified bits as an LFSR seed. During test, the seed is decompressed by an on-chip LFSR (Linear Feedback Shift Register). Subsequent test data compression schemes exploit the same fact to reduce test data volume.
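The seed-expansion step described above can be sketched as follows. This is a minimal illustrative model, not the circuit of any particular scheme: the LFSR width, tap positions, and seed value are arbitrary assumptions chosen for the example.

```python
# Hypothetical sketch of LFSR seed decompression: an on-chip Fibonacci LFSR
# is loaded with a compressed seed, then clocked to reproduce a longer
# stimulus stream for the scan chain. Taps and sizes are illustrative only.

def lfsr_expand(seed_bits, taps, length):
    """Run a Fibonacci LFSR loaded with seed_bits; collect `length` output bits."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])        # bit shifted out toward the scan chain
        fb = 0
        for t in taps:
            fb ^= state[t]           # XOR feedback from the tap positions
        state = [fb] + state[:-1]    # shift, inserting the feedback bit
    return out

# An 8-bit seed expands into 24 scan bits; only the few specified bits of the
# target test cube must match, the remaining positions are don't-cares.
stimulus = lfsr_expand([1, 0, 0, 1, 0, 1, 1, 0], taps=[0, 3, 5], length=24)
print(len(stimulus))
```

Because the seed is much shorter than the expanded stream, the tester stores and transfers only the seed, which is the source of the compression.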
Depending on the implementation of the decompressing hardware, the schemes for test stimulus compression include code-based schemes, broadcast-based schemes, linear-decompressor-based schemes, etc. The linear-decompressor-based schemes typically achieve better encoding efficiency than the other two types. The function of the linear decompressor can be described by a linear Boolean equation AX=Y, where A is a characteristic matrix, X represents the compressed test stimuli supplied by the tester, and Y represents the uncompressed test stimuli shifted into the scan chains.
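Computing the compressed stimuli X amounts to solving AX=Y over GF(2), where only the specified bits of Y contribute equations. The sketch below, with a small made-up characteristic matrix, shows why the system is usually under-determined (more unknowns than specified bits), which is exactly what makes compression possible.

```python
# Solving A·X = Y over GF(2) for a linear decompressor. The 3x4 matrix A
# below is an illustrative assumption: three specified scan bits constrain
# four compressed tester inputs.

def solve_gf2(A, y):
    """Gaussian elimination over GF(2); returns one solution of A x = y, or None."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [y[i]] for i in range(rows)]   # augmented matrix
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    for i in range(r, rows):                      # inconsistent system?
        if M[i][cols]:
            return None
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

A = [[1, 0, 1, 0],
     [0, 1, 1, 1],
     [1, 1, 0, 0]]
y = [1, 0, 1]
x = solve_gf2(A, y)
assert all(sum(a * b for a, b in zip(row, x)) % 2 == yi
           for row, yi in zip(A, y))
```

When the system has no solution, the test cube cannot be encoded with the available inputs, which is the encoding-capacity limit discussed below.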
The combinational linear decompressor implements the characteristic matrix with an XOR network, so its encoding capability at any shift cycle is limited by the number of inputs of the XOR network. The sequential linear decompressor inserts a linear finite-state machine, such as an LFSR or a ring generator, between the decompressor inputs and the XOR network. This improves encoding capability by using the compressed test stimuli shifted in during both the current and previous cycles to encode the test stimuli needed at the current shift cycle. In static reseeding approaches, the specified bits in a test cube are encoded as an LFSR seed, and the LFSR size must be no less than the number of specified bits in the test cube. By injecting compressed test stimuli continuously during shift, dynamic reseeding approaches increase the encoding capability significantly while allowing a smaller LFSR.
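The continuous-injection idea behind dynamic reseeding can be modeled in a few lines. The toy decompressor below is an assumption-laden sketch (state size, taps, and injection positions are invented for illustration): fresh channel bits are XOR-injected into the state every shift cycle, so an output bit at cycle t depends linearly on all inputs received up to t, not just on an initial seed.

```python
# Toy model of a sequential linear decompressor with continuous (dynamic)
# reseeding. Sizes, tap positions, and injection positions are illustrative
# assumptions, not taken from any real design.

def sequential_decompress(channel_bits_per_cycle, cycles, state_size=8,
                          taps=(0, 2, 3), inject_at=(1, 5)):
    state = [0] * state_size
    outputs = []
    for t in range(cycles):
        fb = 0
        for tp in taps:
            fb ^= state[tp]
        state = [fb] + state[:-1]                  # LFSR shift
        for pos, bit in zip(inject_at, channel_bits_per_cycle[t]):
            state[pos] ^= bit                      # inject fresh compressed bits
        outputs.append(state[-1])                  # bit fed to the scan chain
    return outputs

# Two channel inputs per cycle over 12 shift cycles.
inputs = [[1, 0], [0, 1], [1, 1], [0, 0], [1, 0], [0, 0],
          [1, 1], [0, 1], [0, 0], [1, 0], [1, 1], [0, 1]]
out = sequential_decompress(inputs, cycles=12)
```

Because the whole datapath is linear over GF(2), the specified bits of a test cube again translate into a solvable linear system, but now the unknowns span every injected bit across all shift cycles, which is why the encoding capability grows without enlarging the LFSR.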
As the number of cores integrated in a system-on-chip circuit design increases, the number of top-level pins falls far short of what is required to test a large number of cores in parallel. To reduce the test application time, the efficiency of utilizing the limited tester bandwidth must be improved. The specified bits in the test cubes generated by dynamic compaction have been found to be non-uniformly distributed. The tester bandwidth can thus be reduced through dynamic allocation of the input channels feeding the different cores in the system-on-chip circuit design. To support this scheme, de-multiplexers are inserted between the top-level channel inputs and the core inputs, allowing dynamic configuration of the channel inputs feeding each core. In a paper by J. Janicki, et al., entitled “EDT Bandwidth Management in SoC Designs,” in IEEE Trans. on CAD, vol. 31, no. 12, December 2012, pp. 1894-1907, the control data for the de-multiplexers are supplied in a pattern-based manner and uploaded through the same channel inputs that provide the compressed test patterns. In another paper by G. Li, et al., entitled “Multi-Level EDT to Reduce Scan Channels in SoC Designs,” in Proc. ATS, 2012, pp. 77-82, a cycle-based method was proposed to allocate the channel inputs. Although the method proposed by G. Li, et al. provides more flexibility than that of J. Janicki, et al., dedicated control signals must be added for each core to control the configuration of the channel inputs, so the approach does not scale well to circuit designs with a large number of cores.
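The allocation principle, as distinct from the actual algorithms of the cited papers, can be sketched as a simple greedy assignment: cores whose test cubes carry more specified bits demand more encoding capability and therefore receive more of the limited top-level channels. The function name, the minimum-per-core rule, and the demand figures are all hypothetical.

```python
# Hypothetical sketch of channel allocation across cores: each core gets at
# least one channel, and the remainder go greedily to the core with the
# highest specified-bit demand per channel already assigned.

def allocate_channels(specified_bits, total_channels, min_per_core=1):
    n = len(specified_bits)
    alloc = [min_per_core] * n
    remaining = total_channels - min_per_core * n
    assert remaining >= 0, "not enough channels for the minimum per core"
    for _ in range(remaining):
        i = max(range(n), key=lambda k: specified_bits[k] / alloc[k])
        alloc[i] += 1
    return alloc

# Three cores with uneven demand share eight top-level channels.
print(allocate_channels([120, 30, 50], total_channels=8))  # [5, 1, 2]
```

A real scheme must also deliver the resulting configuration to the on-chip de-multiplexers, which is precisely where the pattern-based and cycle-based methods above differ.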
One way to improve tester bandwidth utilization without dynamically allocating the channel inputs is to reduce the number of channels used by each core. Testing each core in an extremely high-compression environment allows more cores to be tested in parallel. Unfortunately, reducing the number of channel inputs feeding a core implies lower encoding capacity. As a result, testable faults may become undetected for lack of encoding capacity. More test patterns are often needed to achieve the same test coverage, since fewer faults can be detected by each test cube during dynamic compaction.