Logic networks comprise elemental logic blocks for performing Boolean logic functions on data. Logic blocks are commonly identified by the Boolean logic function which they individually perform. Logic blocks can take the form of simple logic gates, for example, "AND", "OR", "NAND", and "NOR" gates, to name a few, which are well known in the art. Logic blocks can also be "combinational" logic, in which simple logic gates are cascaded in electrical series and/or parallel to collectively perform a Boolean logic function upon logic inputs to provide a logic output. In combinational logic, the overall logic function is determined by the individual logic functions performed by the individual logic gates.
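The cascading of simple gates into combinational logic can be sketched as follows. This is a minimal illustration, with gate and network names chosen for the example rather than taken from the source; the two-gate network realizes the overall function (a AND b) OR c.

```python
# Elemental logic blocks, each identified by the Boolean function it performs.
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NAND(a, b):
    return not (a and b)

def NOR(a, b):
    return not (a or b)

def network(a, b, c):
    # Two gates cascaded in series: the AND's output feeds the OR.
    # Overall combinational function: (a AND b) OR c.
    return OR(AND(a, b), c)

print(network(True, True, False))   # -> True  (the AND's output is high)
print(network(True, False, False))  # -> False (no active path to the output)
```

The overall function follows directly from the individual gates, as the text describes: change either gate and the network's Boolean function changes with it.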
At present, when logic circuits are being designed, a logic designer initially focuses upon the requisite global functionality of the logic network. The designer knows what the inputs and outputs to the overall logic network will be. Further, the designer knows what the output logic states should be, based upon all of the various combinations of input logic states. From the foregoing design parameters, the designer derives a workable logic network to provide the desired functionality with the fundamental logic blocks. The resultant logic network may have numerous successive levels of logic.
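The design parameters described above can be made concrete as follows. In this sketch (the function and names are illustrative assumptions, not from the source), the desired output state for every combination of input states is recorded as a truth table, and a candidate logic network is checked against it.

```python
from itertools import product

# The designer's specification: the required output logic state for
# every combination of input logic states (here, output low only when
# both inputs are high).
spec = {
    (False, False): True,
    (False, True):  True,
    (True,  False): True,
    (True,  True):  False,
}

def candidate(a, b):
    # One workable network realizing the specification:
    # an AND block followed by an inverter.
    return not (a and b)

# Verify the derived network against all combinations of input states.
assert all(candidate(a, b) == spec[(a, b)]
           for a, b in product([False, True], repeat=2))
print("candidate network meets the specification")
```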
During the next stage in the design of logic circuits, the logic network is optimized (also termed "minimized"), usually with the aid of a computer-aided design (CAD) program utilizing one or more of many known minimization algorithms. Logic optimization is desirable because it enhances circuit reliability, increases the overall speed of the logic network, and reduces the number of circuits.
One form of logic optimization that has been difficult and time-consuming in the art is the identification and removal of redundant logic. Redundant logic may take the form of "untestable" logic or actual "logically redundant" logic. Untestable logic refers to logic which does not affect the functional outcome of the logic network. In other words, even if the logic failed, it would not affect the overall function of the logic network, and in fact the failure could not be noticed under any condition. Logically redundant logic means that a necessary logic function is performed more than once, but is testable.
Thus, logic blocks which are identified as performing redundant logic functions are eliminated to optimize the logic network. Three major forms of redundancy removal are known in the industry: local, global, and two-level optimization using Boolean algebraic techniques. Local optimization is similar to so-called "peephole" optimization in a compiler: specific patterns are sought in small areas of the logic until every small area has been examined. In comparison, global optimization focuses upon large areas of logic. Two-level optimization is described below.
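The peephole-style local optimization described above can be sketched as follows. The expression encoding (nested tuples) and the particular rewrite rules are illustrative assumptions; the point is that each rule matches a small, fixed pattern and is applied throughout the logic.

```python
def peephole(expr):
    """Recursively apply local rewrite rules to a nested-tuple expression.

    An expression is either a primary input (a string) or a tuple
    (op, arg, ...) representing a logic block.
    """
    if not isinstance(expr, tuple):
        return expr                          # a primary input, e.g. 'a'
    op, *args = expr
    args = [peephole(a) for a in args]       # examine each small area first
    # Pattern: double negation, NOT(NOT x) -> x
    if op == 'NOT' and isinstance(args[0], tuple) and args[0][0] == 'NOT':
        return args[0][1]
    # Pattern: idempotence, (x AND x) -> x and (x OR x) -> x
    if op in ('AND', 'OR') and args[0] == args[1]:
        return args[0]
    return (op, *args)

print(peephole(('OR', ('NOT', ('NOT', 'a')), ('AND', 'b', 'b'))))
# -> ('OR', 'a', 'b')
```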
Many Boolean algebraic techniques are available for optimization of large multi-level logic networks. Examples are described in R. K. Brayton, "Factoring Logic Functions," IBM Journal of Research & Development, Vol. 31, No. 2, March 1987; R. K. Brayton, R. Rudell, A. S. Vincentelli, and A. R. Wang, "MIS: A Multiple-Level Logic Optimization System," IEEE Trans. on CAD, Vol. 7, No. 6, June 1988.
Generally, when performing a Boolean analysis, the multi-level logic network is reconfigured into two successive levels of logic blocks between the primary logic inputs and outputs. Most logic networks, or at least parts thereof, can be modelled by two levels of logic via the previously mentioned conventional techniques.
During the Boolean algebraic minimization process, the Boolean equation of each logic output is manipulated until a two-level configuration of logic blocks is realized for each output, where each output is defined by either a "sum of products" or a "product of sums." A "sum of products" is essentially a logic configuration where the first level comprises exclusively AND logic blocks, while the second level consists of an OR logic block. In contrast, a "product of sums" is essentially a logic configuration where the first level only comprises OR logic blocks, while the second level consists of an AND logic block.
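The two two-level forms described above can be illustrated with a single function expressed both ways. The choice of function (exclusive-OR) is an assumption made for the example; the sketch verifies that its "sum of products" and "product of sums" forms agree on every input combination.

```python
from itertools import product

def xor_sop(a, b):
    # Sum of products: level 1 is exclusively AND blocks,
    # level 2 is a single OR block.
    return (a and not b) or (not a and b)

def xor_pos(a, b):
    # Product of sums: level 1 is exclusively OR blocks,
    # level 2 is a single AND block.
    return (a or b) and (not a or not b)

# Both two-level configurations define the same output.
for a, b in product([False, True], repeat=2):
    assert xor_sop(a, b) == xor_pos(a, b)
print("SOP and POS forms agree on all input combinations")
```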
Pursuant to conventional Boolean analysis, the two-level configuration is analyzed for common logic terms. Redundant logic terms are eliminated at each level. Often, logic blocks can be eliminated as well. Generally, when the logic network is configured in two levels, huge logic blocks (e.g., OR or AND logic blocks) with many inputs are derived.
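One way the elimination of redundant terms in a two-level form can be sketched is via the Boolean absorption law, x + x·y = x: a product term whose literals are a superset of another term's is redundant and can be dropped. The encoding of product terms as frozensets of literals is an illustrative assumption.

```python
def remove_redundant_terms(terms):
    """Drop any product term absorbed by a smaller term (x + x*y = x)."""
    kept = []
    for t in sorted(terms, key=len):         # examine smaller terms first
        if not any(k <= t for k in kept):    # t is covered by a kept subset?
            kept.append(t)
    return kept

# f = a*b + a*b*c + b*c  ->  the term a*b*c is absorbed by a*b
sop = [frozenset({'a', 'b'}), frozenset({'a', 'b', 'c'}), frozenset({'b', 'c'})]
print(remove_redundant_terms(sop))
```

After the pass, only the terms a·b and b·c remain, so the large OR block at the second level needs one fewer input, mirroring how logic blocks themselves can sometimes be eliminated.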
During the layout phase of the manufacture of an integrated circuit (IC), the logic blocks may need to be expanded into smaller logic blocks because of timing and space requirements. Expansion is required in order to provide for the physical placement of logic blocks and their interconnections on the IC. When expansion is performed, redundancies are typically reintroduced. Optimization using Boolean algebra and algebraic factoring is therefore performed as expansion proceeds, so as to generate streamlined logic local to where the logic is needed or positioned on the IC chip.
It should be noted that some logic networks, or at least parts thereof, cannot be translated into two levels of logic. An example is logic which comprises feedback loops. In such networks, Boolean analysis is performed only to the extent possible; hence, a complete and thorough minimization of the network cannot be accomplished.
Finally, the logic network is implemented in hardware on an IC, based upon the streamlined version of the logic network derived from the minimization process. The logic network will have the same functionality as envisioned by the designer, but will have a different composition of logic blocks as a result of the simplification process.
After implementation in an IC, test generation processes are performed on the logic network in order to test the integrity of the IC and also the manufacturing process. The use of a "D-algorithm" for test generation is well known in the art; the "D" refers to the composite signal value of Roth's D-calculus, which represents the discrepancy at a node between the good circuit and the faulty circuit. Many versions of D-algorithms exist in the industry. D-algorithms are described in J. P. Roth, "Minimization by the D Algorithm," IEEE Transactions on Computers, Vol. C-35, No. 5, pp. 476-478, May 1986.
The test generation process using the D-algorithm proceeds as follows. Combinations of inputs and input states are generated and passed through the logic network. The results at the outputs are then observed and analyzed. The expected response of the network is known and is compared to the experimental results.
Essentially, during the implementation of the D-algorithm, defects, or faults, in manufacturing are identified. Faults can cause "stuck at" problems ("stuck-at faults"), which are well known in the art. Generally, the "stuck at" concept refers to the condition in which an improper logic state exists at a node due to improper manufacture. As an example, if an AND logic block in a logic network has an input which is always maintained, or "stuck," at a logic low due to a defect, then the output will always be at a logic low because of the controlling stuck input. Consequently, the IC is defective and should be discarded.
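The stuck-at idea above can be sketched as follows. This is an illustration of fault detection, not the D-algorithm itself: a stuck-at-0 fault is injected on one input of an AND block, and the input combinations whose outputs differ between the good and faulty circuits are exactly the test vectors that detect the fault.

```python
from itertools import product

def good(a, b):
    # Fault-free AND block.
    return a and b

def faulty(a, b):
    # Input 'a' stuck at logic low due to a defect:
    # the controlling stuck input forces the output low.
    return False and b

# A test vector detects the fault when the observed output
# differs from the expected (fault-free) output.
tests = [(a, b) for a, b in product([False, True], repeat=2)
         if good(a, b) != faulty(a, b)]
print(tests)   # -> [(True, True)]
```

Only the vector with both inputs high propagates the fault to an observable output, which is why exhaustive search over input combinations becomes so costly in the deep networks discussed below.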
However, the foregoing approach to test generation is burdensome. Execution of the D-algorithm requires an undesirably enormous amount of time, oftentimes hours or days, when the logic networks have many levels of logic blocks. Typical logic networks can have, for example, fifteen to twenty levels of logic. Thus, the number of unique input combinations, and the number of observable points downstream from the site of a fault, are extraordinarily numerous. Consequently, testing the logic paths consumes an undesirable amount of time.
Furthermore, using test generation to identify and remove redundancies is even more undesirable. Each time that the logic at issue is modified via removal of a redundancy, the test generation process must be commenced anew for all faults in the logic.