Mixed-signal integrated circuits (ICs) that include both analog and digital devices represent a significant and growing segment of the semiconductor market. Mixed-signal design content and design complexity are also growing concurrently with the growth in the overall number of mixed-signal designs. This growth is driven partly by the need to apply digital correction techniques to analog circuits in large system-on-chip ICs (built at 90 nm and smaller process nodes), and partly by the need to support the ever-increasing bandwidth and functionality requirements of ICs that enable wireline/wireless communications and consumer electronics products. From the electrical design point of view, the connectivity between analog circuits and digital circuits is evolving from loosely-coupled interfaces to tightly-coupled micro-architectural and circuit-level integration within mixed-signal blocks. Consumer electronics products contain a significant amount of mixed-signal design content because their inputs are mostly analog (human interaction), while their storage is in digital format (flash memories, etc.).
Both analog and digital designs have always relied on simulators for functional verification. Traditionally, the simulation techniques applied to analyze and to verify the analog and digital blocks within ICs were handled separately. However, current approaches to addressing mixed-signal simulation fall short of mixed-signal analysis and verification requirements. This is mainly because these technologies, many of which have been in existence for more than 15 years, are being retrofitted for modern mixed-signal designs. The following techniques have been used in the past, but are no longer adequate: (1) Simulation Program with Integrated Circuit Emphasis (SPICE) simulation using sparse matrix solvers, (2) FastSpice simulation using multi-rate algorithms, and (3) a hybrid approach.
SPICE simulation involves reading a transistor-level circuit and building a flat matrix with all the parameters. This matrix is solved for DC analysis and for each time step of a transient analysis. As a result, the simulator creates an instantaneous model for the whole circuit at each time step using just the device models. This technique is very accurate because (1) it correctly solves Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL), which are the fundamental laws governing all electrical networks, (2) it correctly models circuit behavior by using the full device models, which have been calibrated against the silicon foundries' manufacturing capabilities, and (3) it models the chip as one complete system.
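The flat-matrix approach at the heart of SPICE can be illustrated with a minimal nodal-analysis sketch. The two-node resistor network below is a hypothetical example (the component values are invented for illustration, and this is not any particular simulator's implementation): applying KCL at every node yields a conductance matrix G such that G·v = i, which is solved for the node voltages.

```python
import numpy as np

# Hypothetical two-node network: a 1 mA current source drives node 1,
# R1 = 1 kOhm connects node 1 to node 2, and R2 = 1 kOhm connects
# node 2 to ground. KCL at each node produces the conductance matrix G.
R1, R2, I_src = 1e3, 1e3, 1e-3
G = np.array([[ 1/R1,        -1/R1],
              [-1/R1, 1/R1 + 1/R2]])
i = np.array([I_src, 0.0])     # current injected into each node

v = np.linalg.solve(G, i)      # node voltages satisfying KCL/KVL
# v[0] = 2.0 V, v[1] = 1.0 V: the 1 mA flows through both resistors
```

Full SPICE engines assemble the same kind of matrix by "stamping" each device's contribution, then factor the (sparse) matrix rather than solving it densely.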
FIG. 1 presents a process 100 performed by a typical SPICE simulation implementation to simulate a design. The process 100 begins by receiving (at 110) a design for simulation. The process performs DC analysis (at 120) over the entire design. Next, the process performs transient analysis of the entire design over a series of ascending time steps. The process selects (at 130) a time step. For the selected time step, the process computes (at 140) a state for a non-linear matrix that mathematically models the entire design. The process linearizes (at 150) the matrix such that it can be solved (at 160) in one or more iterations. The process then repeats (at 170) for each remaining time step of the transient analysis.
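As a rough sketch of steps 130-160, the loop below applies backward-Euler time stepping to a single hypothetical nonlinear node (a current source charging a capacitor in parallel with a diode) and linearizes the device equation with Newton-Raphson iterations at each time step. All component values are illustrative assumptions, and a real engine solves a full matrix rather than a scalar equation.

```python
import math

# Hypothetical circuit: a 1 mA source into a 1 nF capacitor in parallel
# with a diode (saturation current Is, thermal voltage Vt).
Is, Vt, C, I_src = 1e-12, 0.025, 1e-9, 1e-3
dt, v = 1e-9, 0.0

for step in range(1000):            # transient analysis over ascending time steps
    v_prev = v
    for _ in range(50):             # Newton-Raphson: linearize, solve, iterate
        i_d = Is * (math.exp(v / Vt) - 1.0)         # diode current at this iterate
        f = C * (v - v_prev) / dt + i_d - I_src     # KCL residual (backward Euler)
        df = C / dt + (Is / Vt) * math.exp(v / Vt)  # linearized conductance
        dv = f / df
        v -= dv
        if abs(dv) < 1e-12:         # converged for this time step
            break

# v settles near Vt * ln(I_src / Is), about 0.52 V, where the diode sinks all 1 mA
```

Because the full exponential device equation is re-linearized at every iteration of every time step, the result honors KCL to within the convergence tolerance, which is the accuracy property the text attributes to SPICE.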
As the circuit size increases, the simulation time for the process 100 and other such simulation processes or techniques increases super-linearly (i.e., faster than in direct proportion to circuit size). This puts a practical limit on simulation capacity using this approach. Recently, several companies have introduced improved offerings based on conventional SPICE engines with more efficient algorithms or engines that use parallelization. While these products have delivered moderate (e.g., 5-10×) increases in simulation speed and modest improvements in overall capacity, they fall short of adequately meeting the challenges of large mixed-signal designs.
In FastSpice simulation, the circuit is divided into multiple partitions, and each partition is simulated with its own time step (asynchronously) using simplified device models. These models can be voltage-based tables, current-based tables, or analytical macro-models. Such simulators are often very fast and capable of handling large circuits. However, these simulators suffer from accuracy degradation because of manually set accuracy tolerances, a fundamental reliance on macro-models to approximately represent circuit behavior, a failure to adhere to Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL), and a dependence on event-based simulation.
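The multi-rate, event-driven idea can be sketched as follows. The example is purely illustrative (two first-order RC partitions, forward-Euler updates, and invented time constants and tolerance values): each partition advances with its own time step, and a change in the fast partition's output is propagated across the boundary only when it exceeds a voltage tolerance.

```python
# Two hypothetical partitions: a fast RC node driven by a 1 V step, and a
# slow RC node driven by the fast node's output across a partition boundary.
dt_fast, tau_fast = 1e-9, 5e-9     # fast partition: 1 ns steps
dt_slow, tau_slow = 1e-8, 1e-7     # slow partition: 10 ns steps
vtol = 0.01                        # event threshold at the boundary

v_fast = v_slow = boundary = 0.0
events = 0
for step in range(1000):                             # 1 us of simulated time
    v_fast += dt_fast * (1.0 - v_fast) / tau_fast    # fast partition update
    if abs(v_fast - boundary) > vtol:
        boundary = v_fast                            # event crosses the boundary
        events += 1
    if step % 10 == 0:                               # slow partition at its own rate
        v_slow += dt_slow * (boundary - v_slow) / tau_slow

# The slow partition never sees changes smaller than vtol, so its input can
# be stale by up to that tolerance: the speed/accuracy trade-off in miniature.
```

The event count stays far below the number of fast time steps, which is where the speed comes from; the quantization of boundary voltages is one source of the accuracy degradation described in the text.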
Specifically, designers can set the desired accuracy level for their simulations in order to trade off accuracy against speed. Setting these tolerances without properly understanding them can produce invalid results. Additionally, these simulators can typically only produce results accurate to within about 5% of SPICE. This is due to the simulator's fundamental reliance on macro-models to approximately represent circuit behavior. Such reliance results in a trade-off between (1) nominal and corner-case accuracy and (2) simulation speed. The trade-off is a result of the macro-models, which are either table lookups or algorithmic simplifications of the full device models provided by silicon foundries. Also, by not solving Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL), the simulators do not adhere to the fundamental circuit simulation accuracy requirement of charge conservation. Lastly, the dependence on event-based simulation further detrimentally affects the accuracy of the simulation because event propagation across partition boundaries is only triggered when there is a significant change in voltage.
Some designers get around the accuracy issues in FastSpice and the capacity issues in SPICE by using a hybrid approach. To do so, IC designers manually select the partitions in which they make accuracy-capacity trade-offs, based on prior experience and judgment. The problem with this approach is that designers are required to prejudge, and hence bias, the simulation, which can result in design error escapes. These errors are especially insidious at the digital-analog interfaces. This approach runs counter to simulation requirements, which are becoming increasingly stringent to address the growth in size, complexity, and performance of mixed-signal designs, and it adds risk to the critical design analysis and verification process. Such risks can contribute to silicon malfunction and thus the need and added cost for silicon respins to correct the design.
In summary, each of the above-described techniques and other existing electrical circuit simulation techniques has a specific scope of application in terms of speed and accuracy of results. Additionally, each of these techniques has a capacity limit that restricts the size of the design that can be simulated efficiently. The speed of the simulations and the accuracy of their results can be traded off by limiting the number of iterations or by using different simulation engines. These trade-offs render the conventional simulators and techniques inadequate for today's mixed-signal designs and upcoming mixed-signal design requirements. Accordingly, the requirements for mixed-signal simulators are growing apace for the following reasons:
First, mixed-signal designs tend to be larger than pure analog designs. Hence mixed-signal simulators must be able to handle these large designs, of up to 10M elements or more, without requiring a trade-off between capacity and throughput/performance.
Second, mixed-signal designs require higher accuracy than purely digital designs. Purely digital designs often have circuits with one of two states (e.g., 0 or 1), whereas analog circuits may have several different states. The higher accuracy for analog circuits is needed in order to identify and address complex effects, like voltage changes and timing, resulting from dynamically changing operational conditions which affect device functionality and yield. For instance, a 14-bit sample-and-hold circuit (e.g., an analog to digital converter (ADC)) requires voltage accuracy on the order of one micro-Volt. Additionally, this sample-and-hold circuit requires high accuracy in order to model settling error correctly which, if ignored, can easily swamp out the lower order bits.
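The micro-Volt figure can be checked with simple arithmetic (assuming, for illustration only, a 1 V full-scale range): a 14-bit converter's least significant bit is about 61 µV, and resolving settling behavior well below half an LSB pushes the required simulation accuracy toward the micro-Volt range.

```python
import math

full_scale = 1.0                  # assumed full-scale range (illustrative)
bits = 14

lsb = full_scale / 2**bits        # one LSB: about 61 uV for a 1 V range
# Exponential settling: the error exp(-t/tau) must fall below 1/2 LSB,
# i.e. t > tau * ln(2**(bits + 1)) -- roughly 10.4 time constants.
settle = math.log(2**(bits + 1))
```

A simulator whose voltage noise floor sits at even a few micro-Volts would mask this settling tail, which is the "swamping of the lower order bits" the text describes.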
Third, mixed-signal designs tend to have coupling between various physical components. Substrate noise, electromagnetic coupling and inductive coupling effects are a few manifestations of such physical interaction between integrated circuit elements. These effects must be simulated accurately with post-layout parasitics annotated to the netlist/schematic to ensure that the circuit works correctly under nominal and corner case conditions, thereby dramatically increasing the capacity requirements of the simulator.
Fourth, mixed-signal designs have very tight variation tolerances across operating modes. This means that a large number of simulations need to be run across multiple modes, with high accuracy. In other words, the chip needs to be simulated under more conditions than a purely analog or digital chip.
Fifth, mixed-signal designs are more sensitive to variation in process, voltage, and temperature (PVT) corners. These variations must be simulated accurately to ensure that the entire chip works correctly across the operational and manufacturing process range.
These issues have profound implications for the design of mixed-signal chips. Designers would like to run an adequate number of simulations to ensure that the necessary analysis and verification coverage of the design-space, environment-space, and manufacturing-space is achieved prior to tapeout. With conventional techniques and/or simulators, it is not feasible to run all of these simulations at the required accuracy and achieve adequate coverage within practical schedule constraints. This forces IC designers to trade off the level of coverage achieved before tapeout against the overall development schedule, contributing to the risk of device malfunction, reduced yield, and silicon respins. Such delays can jeopardize the economic return on the IC as the end-product, increasing the risk of missing a target window or of shipping an incorrectly-functioning IC.
Therefore, all of the above approaches provide sub-optimal results for present day mixed-signal simulation because they inherently trade-off accuracy, capacity, and performance. Accordingly, a new method and system is required that simulates a circuit design while delivering performance and capacity without compromising on accuracy.