The semiconductor industry continuously improves the achievable density of very large-scale integrated (VLSI) circuits. In order to benefit from these available levels of integration, design methodologies and techniques must continue to manage the increased complexity inherent in such large chips. One such emerging methodology is system-on-chip (SoC) design, wherein predesigned and preverified blocks, often called intellectual property blocks or "IPs", are combined on a single chip. A library of reusable IP blocks with various timing, area and power configurations is a key to successful SoC integration, as the SoC integrator can apply the trade-offs that best match the needs of the target application.
Digital design follows a well-defined top-down design methodology, embodied in various computerized methods that automate the implementation and integration of numerous digital IP blocks into a large-scale integrated circuit.
However, analog/mixed-signal (AMS) design traditionally tends to follow an ad hoc custom design process, with little or no possibility of reusing existing IP blocks. Also, when analog and digital blocks coexist on the same substrate, the analog portion can be more time-consuming to develop even though it may represent a smaller percentage of the chip area.
Indeed, nonlinear unitary devices are very complex to model when used in an analog context, as their behavior is directly and immediately impacted by their unitary dimensions, such as the width (W) or length (L) of the physical transistors.
Until recently, capitalization of knowledge in analog circuits has been mainly limited to libraries of netlists, which describe structure rather than behavior.
Knowledge capitalization has been introduced in some tools through hard-coding of sizing and biasing plans for operating-point computation, such as in OASYS, described by Harjani et al. in "OASYS: A framework for analog circuit synthesis", IEEE Trans. Computer-Aided Design, 8(12):1247-1266, December 1989. However, such capitalization rests mainly on the human designer's know-how and is both a tedious and very lengthy undertaking, while remaining insufficiently efficient for automation.
As computer-based SPICE-like simulators became available for predicting the behavior of a given sized unitary device, computerized optimizers were designed around 1980. Based on a given netlist, such an optimizer essentially runs through successive simulations of the behavior of a circuit for different sets of dimensions. Convergence is achieved by a step-by-step variation of unitary dimensions, while evaluating cost or performance functions to reach an "optimized" solution. Such technology is used in the MAELSTROM software, described in Krasnicki et al., "MAELSTROM: Efficient simulation-based synthesis for custom analog cells", Proc. Design Automation Conf., pages 945-950, June 1999.
Document U.S. Pat. No. 7,516,423 to De Smedt, entitled "Method and apparatus for designing electronic circuits using optimization", also proposes a design method based on formulating an optimization problem and using an evolutionary process to produce a solution. This document proposes to define an optimization problem comprising: design objectives, constraint rules, and constraint handling mechanisms. The proposed optimization process comprises: providing at least one set of candidate solutions; evaluating said objectives for said set; and selecting at least one subset from said set based at least in part on said act of evaluating. This evaluation is embedded in an "evolutionary" optimization process that produces a solution.
United States Patent Application Publication No. 2003/0009729 to Rodney Phelps et al., published Jan. 18, 2002 and entitled "Method for Automatically Sizing and Biasing Circuits", and similarly "ASF: A Practical Simulation-Based Methodology for the Synthesis of Custom Analog Circuits", M. Krasnicki et al., Proceedings of the International Conference on Computer-Aided Design, pp. 350-357, November 2001, describe a framework used to size a circuit topology towards a given set of performance specifications. Performances are evaluated based on circuit simulations. Solutions from previous optimization sessions are stored in a database. Information stored in this database steers the optimization engine in finding a solution to the design problem in a reduced amount of time. The way previous or intermediate solutions are stored in this database does not guarantee, however, that this set of candidate solutions covers a relevant portion of the performance space. Hybrid optimization techniques are applied, where advanced "simulated annealing" techniques are combined with "hill-climbing" techniques. The optimization algorithm itself is not able to construct knowledge of the search space based solely on its own experience.
However, in contrast with the digital context, design parameter ranges, especially for sizes, are very large and have a direct impact on the analog behavior of every unitary device, even for a very small circuit. As an example, a circuit of five transistors, if each dimension is evaluated over 10,000 steps, yields over 10^20 possible combinations of dimensions to be simulated, many of which are technically neither realistic nor interesting, which makes optimizing larger-scale circuits very difficult if not impossible.
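The order of magnitude quoted above can be checked with elementary arithmetic; the short sketch below simply assumes, as in the example, one swept dimension per transistor:

```python
# Exhaustive sizing search space: 5 transistors, each with one dimension
# (e.g. width W) swept over 10 000 discrete steps; every combination of
# the 5 values is a distinct candidate circuit to simulate.
transistors = 5
steps_per_dimension = 10_000
combinations = steps_per_dimension ** transistors   # 10^20 candidates
```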
Optimizers thus need to rely on human heuristic hypotheses, which may easily lead to missing many interesting solutions.
Direct Current Solver
The present inventors have built a conceptual design method that links the system level down to the technology level into a design abstraction model, implemented as an automated circuit design method. This design method is detailed more precisely in the PhD thesis of M. R. Iskander, defended on Jul. 2, 2008 in Paris, "Knowledge-aware synthesis for analog integrated circuit design and reuse", which is hereby incorporated by reference.
As illustrated in FIG. 1, an analog circuit may be represented as a hierarchy of one or several module levels, in which each level of the hierarchy is only linked with its respective direct parent level and direct child level. Within this hierarchy, the lowest levels include "elementary devices", each made of one or several transistors.
Throughout the hierarchy of a circuit, functional design parameters are used from the circuit level, and propagated through each level down to the transistor level.
At transistor level, the design method then implements a sizing and biasing method: with a given value, fixed by the designer or computed from the parent level, for some electrical and/or dimensional parameters of the transistor, the remaining parameters are computed through memorized mathematical operators. As an example, an operator called OPVS(Veg, VB):
        is noted "Temp, ID, L, Veg, VD, VG, VB (VS, Vth, W)";
        uses the parameters: temperature (Temp), drain current (ID), length (L), overdrive gate voltage (Veg), drain voltage (VD), gate voltage (VG) and bulk voltage (VB); and
        provides the parameters: source voltage (VS), threshold voltage (Vth), and width (W).
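Such an operator can be sketched as a plain function returning the provided parameters from the used ones. The sketch below is a textbook square-law model with a first-order body effect, taken purely as an assumption for illustration; the memorized operators of the method invert full industrial models instead:

```python
import math

def opvs(temp, ID, L, Veg, VD, VG, VB,
         Vth0=0.5, gamma=0.4, phi=0.7, mu_cox=200e-6):
    """Toy OPVS(Veg, VB) operator: from (Temp, ID, L, Veg, VD, VG, VB),
    return (VS, Vth, W).  Square-law model with first-order body effect;
    temp and VD are accepted for interface fidelity but unused here."""
    # Vth depends on VS through the body effect, and VS depends on Vth
    # through VGS = Vth + Veg: solve by fixed-point iteration.
    VS = VG - Vth0 - Veg                    # initial guess, no body effect
    for _ in range(50):
        Vsb = max(VS - VB, 0.0)
        Vth = Vth0 + gamma * (math.sqrt(phi + Vsb) - math.sqrt(phi))
        VS_new = VG - Vth - Veg
        if abs(VS_new - VS) < 1e-12:
            break
        VS = VS_new
    # Saturation square law ID = (mu*Cox/2)*(W/L)*Veg**2, inverted for W.
    W = 2.0 * ID * L / (mu_cox * Veg ** 2)
    return VS, Vth, W
```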
These operators are typically obtained through inversion of a functional model existing for this specific transistor in the concerned technology, such as inverting the BSIM3v3, BSIM4, PSP or EKV model equations. Successive iterations of such computations may then be brought to convergence to provide a numerical solution at transistor level, typically using a Newton-Raphson resolution algorithm.
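The convergence step can be sketched with a generic Newton-Raphson loop; the square-law equation being inverted below, together with its numeric values, is an assumed stand-in for the actual model equations:

```python
def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=100):
    """Generic Newton-Raphson iteration, of the kind used to bring the
    transistor-level operator computations to convergence: solve f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative inverted model equation: find the overdrive voltage Veg
# giving a target drain current under a square-law model (assumed values).
mu_cox, W, L, ID = 200e-6, 10e-6, 1e-6, 100e-6
f = lambda veg: 0.5 * mu_cox * (W / L) * veg ** 2 - ID
dfdx = lambda veg: mu_cox * (W / L) * veg
veg = newton_raphson(f, dfdx, x0=0.5)
```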
This circuit design method uses a dependency graph system for linking the different variables of a module of any level to variables of other modules, as well as to device parameters (e.g. dimensions and biasing voltages of the physical transistor) and design parameters (e.g. current, overdrive gate voltage and length at chosen points) for the designed circuit.
In FIG. 2 is illustrated an example of a CMOS circuit for a simple OTA amplifier which includes three elementary devices (encircled in dashed lines), namely: a current mirror CM with two transistors M3 and M4, a differential pair DP with two transistors M1 and M2, and one single transistor M5 that is seen as an elementary device in itself.
At transistor level, the dependency graph links together external imposed electrical parameters (e.g. current and overdrive gate voltage and length in chosen points) together with internal parameters (including dimensions and biasing voltages of the physical transistor). Such a graph is illustrated in FIG. 3 applied to the parameters of transistor M3.
In this graph system, variables are represented as circle nodes linked with another set of rectangle nodes called operators, forming a bipartite graph representation having two disjoint sets of nodes: circle nodes for variables and rectangle nodes for operators. The bipartite graph represents the structural dependencies between variables and operators. A variable, such as the overdrive gate voltage Veg of transistor M3, is linked as an input parameter to the operator OPVGD(Veg). Five input arcs are incident to the operator OPVGD(Veg) and four output arcs come from it. The mathematical operator OPVGD(Veg) is represented by the rectangle node and is connected to the input and output arcs, which are themselves connected to the related input and output variables.
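The bipartite structure can be captured directly as two node sets plus directed arcs; the sketch below mirrors the arc counts described for OPVGD(Veg), while the exact variable names attached to each arc are assumptions for illustration, not taken from the tool:

```python
# Minimal bipartite dependency-graph sketch: variables form one node set,
# operators the other, and arcs are directed.  The five input arcs and
# four output arcs of OPVGD(Veg) match the FIG. 3 description; the
# variable names on each arc are assumed for illustration.
operators = {
    "OPVGD(Veg)": {
        "inputs":  ["Temp", "ID", "L", "Veg", "VB"],   # 5 input arcs
        "outputs": ["VG", "VD", "Vth", "W"],           # 4 output arcs
    },
}
variables = sorted({v for op in operators.values()
                    for arc_list in op.values() for v in arc_list})

def producers(variable):
    """Operators whose output arcs feed the given variable node."""
    return [name for name, op in operators.items()
            if variable in op["outputs"]]
```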
Dependency graphs of more complex devices including several transistors are built similarly, such as the current mirror CM, as illustrated in FIG. 4. In this graph, it is assumed that transistor M4 is chosen with dimensional proportions identical to transistor M3, and the “x1” rectangle node means that a scale factor of “1” is chosen between M3 and M4 dimensions (here for width “W”).
As illustrated in FIG. 5 for the simple OTA amplifier, a design view of a circuit may thus be built as a global dependency graph through a bottom-up approach across the hierarchy of this circuit, together with various constraints or technical choices (first input row) fixed by the designer.
This design method includes the input and memorization of different modules, including their dependency graphs and operators. Memorized graphs for known modules are coded, or may be generated, and are then stored in a programming language, for example as a "generator" within the CAIRO+/CHAMS design environment. Such stored modules, whether elementary devices or intermediate or circuit-level modules, may then be individually tested, evaluated and corrected, so as to be subsequently reused as "analog IP blocks".
For any kind of stored module, be it an elementary device or a more complex circuit, its associated dependency graph may then be retrieved, automatically compiled and browsed top-down with respect to the defined hierarchy (similarly to FIG. 1), so as to start from initial designer choices and provide final transistor-level data, such as dimensions (typically width W and length L) and local electrical data (e.g. terminal voltages such as VG, VS or VD).
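This top-down browsing amounts to running each operator once all of its input variables are known, starting from the designer's initial choices. The sketch below illustrates the idea with a topological ordering; the operator bodies are toy stand-ins, not the actual inverted model equations:

```python
from graphlib import TopologicalSorter

# Hedged sketch of top-down dependency-graph evaluation.  Each operator
# declares its input/output variables; operators fire in dependency order.
ops = {
    "op_bias": {"in": ["Veg"],            "out": ["VGS"],
                "fn": lambda v: {"VGS": 0.5 + v["Veg"]}},      # Vth0 + Veg
    "op_size": {"in": ["ID", "Veg", "L"], "out": ["W"],
                "fn": lambda v: {"W": 2 * v["ID"] * v["L"]
                                      / (200e-6 * v["Veg"] ** 2)}},
    "op_vs":   {"in": ["VG", "VGS"],      "out": ["VS"],
                "fn": lambda v: {"VS": v["VG"] - v["VGS"]}},
}

def evaluate(designer_choices):
    """Browse the graph top-down: run operators in dependency order."""
    produced_by = {o: name for name, op in ops.items() for o in op["out"]}
    deps = {name: {produced_by[i] for i in op["in"] if i in produced_by}
            for name, op in ops.items()}
    values = dict(designer_choices)
    for name in TopologicalSorter(deps).static_order():
        values.update(ops[name]["fn"](values))
    return values

result = evaluate({"Veg": 0.2, "ID": 100e-6, "L": 1e-6, "VG": 1.2})
```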
As illustrated in FIG. 6, this sizing and biasing operation is included as an automated design step between a vector V1 of external and functional parameters (i.e. circuit parameters) and a vector V2 of internal parameters (i.e. bias and size parameters). This step comprises a computer automation evaluating the Design View graph (of FIG. 5) of the circuit, to mathematically derive the size and bias parameters V2 from the design parameters V1. This step thus makes it possible to use equations that are structured according to the parameters that are actually meaningful to the designer.
This vector V2 is then fed to a simulator using a transistor model, to provide a vector V3 representing the local behavior of the transistor around its operating point (including small-signal parameters such as the transconductance gm, the output conductance gds, etc.). This vector V3 is then used to evaluate the functional results (i.e. performances) in a set of performance data V4.
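The V1 to V4 chain can be sketched as three composed mappings. All three functions below are toy stand-ins (square-law formulas and an assumed single-stage gain expression), not the real graph evaluation or simulator:

```python
# Hedged sketch of the FIG. 6 data flow: design parameters V1 map to
# sizes/biases V2 (Design View graph), V2 to an operating point V3
# (transistor model), and V3 to performances V4.

def design_view(v1):            # V1 -> V2 (automated sizing and biasing)
    W = 2 * v1["ID"] * v1["L"] / (200e-6 * v1["Veg"] ** 2)
    return {"W": W, "L": v1["L"], "ID": v1["ID"], "Veg": v1["Veg"]}

def simulate(v2):               # V2 -> V3 (small signals around the DC point)
    gm = 2 * v2["ID"] / v2["Veg"]       # square-law transconductance
    gds = v2["ID"] * 0.05               # crude output-conductance proxy
    return {"gm": gm, "gds": gds}

def performances(v3):           # V3 -> V4 (e.g. DC gain of a single stage)
    return {"gain": v3["gm"] / v3["gds"]}

v1 = {"ID": 100e-6, "Veg": 0.2, "L": 1e-6}
v4 = performances(simulate(design_view(v1)))
```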
This design method thus makes it possible to introduce such sizes and biases (vector V2) within an optimization loop L2 based on the evaluated performances, together with cost function parameters fixed by the designer.
Furthermore, this method also makes it possible to introduce the selection of the design parameters V1 within an optimization loop L1. An important automation gap in previous design tools is thus filled through the use of the Design View graph evaluation, enabling the circuit designer to input his functional parameters V1 directly into the design tool, in a form that is far more meaningful and intuitive for him. The design tool thus enables him to obtain more value from his technical experience and know-how in a much shorter design time.
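Loop L1 can be sketched as a simple sweep of a functional parameter under a designer cost function. Everything below (the toy evaluator, the gain target, the area penalty weight) is an assumption for illustration, not the actual optimizer:

```python
# Hedged sketch of optimization loop L1: sweep one functional parameter
# (here the overdrive voltage Veg in V1) and keep the candidate that
# minimizes a designer-defined cost over the evaluated performances.

def evaluate_candidate(veg, ID=100e-6, L=1e-6):
    gm = 2 * ID / veg                      # square-law transconductance
    gain = gm / (ID * 0.05)                # single-stage DC gain proxy
    W = 2 * ID * L / (200e-6 * veg ** 2)   # required width
    return {"gain": gain, "W": W}

def cost(perf, target_gain=150.0):
    # Penalize missing the gain specification, plus a mild area penalty.
    return max(0.0, target_gain - perf["gain"]) + 1e4 * perf["W"]

candidates = [0.10 + 0.01 * k for k in range(30)]   # Veg sweep, 0.10..0.39 V
best_veg = min(candidates, key=lambda veg: cost(evaluate_candidate(veg)))
```

The sweep picks the largest overdrive that still meets the gain target, since a larger Veg shrinks the required width: a trade-off of the kind the cost function is meant to encode.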
Once implemented within a computerized design environment tool, currently as a CAIRO+/CHAMS library of generators, this method provides a library of reusable software components, each of which may correspond to an IP block.
As illustrated in FIG. 7, such a circuit generator receives as input data the electrical and physical parameters of the manufacturing process, as well as specification values. It provides as output data a dimensioned netlist, a performance list, and the drawings of the related layout masks. The dependency graph of a new circuit constituted of several known analog IP blocks may also be automatically computed by retrieving the dependency graphs of these blocks and combining them into a new global graph, thus building a new generator for the new circuit.
Limits of Direct Current Analysis
Currently, this design method only makes it possible to use the static behavior of components in order to optimize the sizing and biasing. Only direct current behavior is taken into account: signals are only seen as small signals around a Direct Current Operating Point.
Thus, as in most prior art Direct Current Operating Point methods, the most commonly used simulation mode for many practical electronic circuits is a constant steady state resulting from DC excitations, or exhibiting very small variations around a DC operating point.
The optimization and design of analog circuits may thus give less precise or less relevant results, for example for circuits handling large or fast signals and/or exhibiting unstable or long-duration variations.
When the performance of the designed system relies on instantaneous signal variations, simulation is usually implemented by impressing, through the energy sources, a reference signal as a static state. This reference signal is then used to provide a background for the time-varying signals.
In the case of small-signal Alternating Current operation, signals vary slightly in the vicinity of the bias.
Therefore, if small incremental signals are considered, the behavior of the nonlinear circuit depends not only on its topology and the character of its branches, but also on the impressed bias. Thus, the design process for analog integrated circuits starts with the D.C. analysis and verification of the D.C. signals.
Similarly, systems operating with slowly time-varying signals of a large dynamic range, covering a wide range of nonlinear characteristics of the system elements, require a sequence of D.C. simulations with a gradually changing excitation. Such methods, called D.C. sweep or multipoint D.C. analysis, examine the dependence of responses on the excitations, assuming that the latter vary slowly enough to neglect reactive effects. Such an analysis is useful for analog and digital circuits with slowly time-varying signals.
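A multipoint D.C. analysis of this kind can be sketched as repeated operating-point solves under a gradually changing excitation; the single-branch circuit and bisection solver below are assumptions standing in for a real simulator:

```python
import math

def dc_operating_point(v_source, R=1e3, Is=1e-12, Vt=0.025):
    """Static node voltage of a toy series resistor-diode branch, solved
    by bisection on the KCL residual (reactive effects neglected)."""
    lo, hi = 0.0, max(v_source, 0.0)
    for _ in range(100):
        v = 0.5 * (lo + hi)
        residual = (v_source - v) / R - Is * (math.exp(v / Vt) - 1.0)
        lo, hi = (v, hi) if residual > 0 else (lo, v)
    return v

# D.C. sweep: the excitation changes gradually, one static solve per point.
sweep = [dc_operating_point(vs) for vs in (0.0, 0.5, 1.0, 1.5, 2.0)]
```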
However, when the reactive effects of generally nonlinear components such as capacitors, inductors, diodes, MOS transistors, etc. are taken into account over simulation time, direct current solvers become insufficient, even in a multipoint sequence.
Therefore, there is a need for nonlinear analysis that could also provide a precise and relevant analysis and simulation of the behavior of an analog circuit which is operating under transient conditions, and possibly an automated optimization under such transient conditions.
An object of the present invention is to improve the performance, precision and applicability of the current tools, and especially to enable a more efficient, precise and relevant analysis, simulation and optimization for nonlinear transient configurations, within such a design assistance method.