1. Field of the Invention
This invention relates to a method and system for providing optimal tuning for complex simulators.
2. Brief Description of the Prior Art
In the fabrication of semiconductor devices, there are many standard steps required to be performed to fabricate the completed device, such as, for example, doping, deposition, fabrication of the various metal and/or semiconductor layers and other steps. While the physical phenomena involved in many of these steps are well understood, this is not the case for all of them; some of the physical phenomena involved are not so well understood.
With the ever-increasing cost of manufacturing full-flow semiconductor devices, there is a great effort to reduce the number of semiconductor wafers used in developing new technologies. One such effort involves performing process and device simulations in place of fabricating the semiconductor wafers themselves. However, in order for this method to be useful, the simulations must accurately predict the actual results that would occur if the devices were to be manufactured and measured.
Furthermore, both the process and device simulators of the prior art have several deficiencies that result in poor predictive capability, with the measured data being approximately 2.5 orders of magnitude below the simulation data, such results being clearly unacceptable for prediction purposes. As a result of this type of inaccuracy, simulation has been relegated to providing "trend" information that is used to aid in the design of experiments on real wafers instead of actually replacing fabrication of those wafers.
For this reason, complex simulators, such as semiconductor process and device simulators, often have to be "tuned" so that they can predict real-world data. The reasons for this are that many of the physical constants that the simulators use are not known exactly a priori and that the simulators often do not capture all of the physical phenomena involved. Tuning of these simulators requires determination of the correct values for the tunable parameters in the simulators such that the simulator can predict real-world data. Since each run of the simulator is expensive in terms of time and resources, the number of runs required to tune the simulators must be minimized. Additional problems arise from the presence of multiple objectives in the tuning, which gives rise to nonlinear objective functions. Known methodologies for tuning generally involve optimizing the tuning parameters by directly running the simulator (i.e., the simulator is in the optimization loop). The problems resulting from the prior art of the type described above are that the nonlinear objective functions lead to local minima; the expense of running simulations inhibits the use of global optimizers that can escape from local minima; advantage cannot be taken of job farming; and multiple evaluations of the gradient are required, which is expensive in high dimensions.
The most basic prior art system places the simulator in the loop along with the optimizer, with the objective function being derived by operation on the output of the simulator and experimental data, as is shown in FIG. 1. There is shown typical prior art simulating circuitry which is a closed loop containing an objective function device 1, typically a summing amplifier, an optimizer 3, which is typically a gain stage, and a simulator 5, which is an unknown circuit. The output of the simulator 5 and the experimental data are both fed to the objective function device 1, which provides an error signal at its output, such as by sum squared difference or other appropriate well known procedure. This output is optimized by the optimizer 3, in well known manner, and fed to the simulator 5. The simulator 5 utilizes the output of the optimizer 3 in its circuit to provide the output which is fed to the objective function device 1. The output of the simulator is altered by altering one or more of the parameters measured therein when there is an error, until the error is zero. The simulations require a great deal of time. Problems with this prior art system are that the operation is serial in nature, that it requires an unknown number of simulations, which affects speed, and that the quality of the solution depends upon the starting point, with the system, on occasion, not finding a solution at all. It follows that the prior art system is not very thorough, and it is apparent that a system which will provide the same result on a more rapid basis is highly desirable.
A further prior art system replaces both the simulator and the objective function with a response-surface model (RSM) and is shown in FIG. 2, with optimization taking place as shown in FIG. 3. This system is described in a paper of Gilles Le Carval et al. entitled Methodology For Predictive Calibration of TCAD Simulators, Proceedings of 1997 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD), pp. 177-180 (1997), sponsored by IEEE, the contents of which are incorporated herein by reference, wherein there is a proposal to obtain a set of calibrated model parameters for predictive simulations. This methodology associates Design of Experiments (DOE) with the Response Surface Method (RSM) and also advanced concepts of statistical analysis: D-optimal filtering and Taguchi's method. It has the characteristics of insensitivity to process conditions, optimal use of existing experimental results, rigorous statistical analysis of the data and clever selection of the model parameters. A problem with this system is that the method attempts to model the objective function (based upon the simulator outputs and the experimental data) instead of modeling the output functions themselves. In order to calculate the error signal, the square root of the sum of the squares of the target value (a low frequency signal) of each response minus the measured value must be calculated, this providing a very complex (high frequency) result. The prior art has attempted to model this complex result directly. It follows that when this complex result is compared with the experimental data received, which is much less complex (low frequency), a great deal of data is lost, with the concomitant results of such data loss.
By combining the simulator and the data into the RSM, statistical analysis of tuning is not possible since, in general, the amount of noise in the coefficients cannot be determined by the prior art, and especially not by the system and method described by Le Carval et al.
In accordance with the present invention, the objective function is computed from the outputs of RSMs of the simulator responses, rather than having the error signal itself be the RSM of the inputs or parameters as in the Le Carval method described above. Therefore, the RSMs, which replace the simulator, model the outputs that are received from the simulator, thereby retaining the high frequency behavior much more accurately and making the system more extensible in that, if the experimental data is now changed, such as when tuning to another set of data over the same region of parameters, the experimental data need merely be replaced and the optimization recalculated. Furthermore, the RSMs in accordance with the present invention are independent of the experimental data, whereas the prior art RSMs are dependent upon the experimental data. Therefore, the prior art must rebuild its models with any change in experimental data. Accordingly, the system in accordance with the present invention is more accurate than are prior art systems and is independent of experimental data. In addition, the methodology in accordance with the present invention has broader applicability due to modeling the low frequency outputs of the simulator rather than the high frequency objective function.
Briefly, there is provided a general coherent methodology for tuning simulators to experimental data. With this methodology, excellent matches between simulated and experimental data are achieved to within one percent. The initial phase is to build an RSM model, this including designing an experiment wherein there are provided the points at which to tune and the ranges for the parameter settings. A simulation experiment is then run and relevant information is extracted from the simulated experiment. The extracted information is used to build an RSM model or models. This RSM model or models is now used in an optimization function without a simulator, wherein experimental data is provided and targets are extracted from the experimental data. An objective function is provided as a function of the targets extracted from the data and the output(s) of the RSM(s) and provided to an optimizer, which then updates the RSM(s).
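The phases just described can be sketched in a few lines of Python. This is a minimal illustration only: a trivial one-parameter function stands in for the expensive simulator, and the design points, target value and quadratic RSM form are all hypothetical choices, not part of the invention as claimed.

```python
import numpy as np

def toy_simulator(p):
    """Stand-in for an expensive process simulator: one output as a
    function of a single tuning parameter p (hypothetical physics)."""
    return 2.0 * p + 0.5 * p ** 2

# Phase 1: design the experiment -- points spanning the tuning range.
design = np.linspace(0.0, 2.0, 9)

# Phase 2: run the simulation experiment and extract the responses.
responses = np.array([toy_simulator(p) for p in design])

# Phase 3: build an RSM (here a quadratic polynomial) from the responses.
rsm = np.poly1d(np.polyfit(design, responses, deg=2))

# Phase 4: optimize against the RSM only -- the simulator is no longer
# in the loop, so evaluations are essentially free.
target = 3.0  # target extracted from experimental data (hypothetical)
grid = np.linspace(0.0, 2.0, 2001)
p_tuned = grid[np.argmin((rsm(grid) - target) ** 2)]
```

Note that once the RSM is built, changing the target requires only re-running the cheap optimization step, not new simulations.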
To facilitate the use of this tuning technology, a software tool is provided that implements the methodology in an easy to use interface. This tool allows the user to specify a simulation block that controls the simulator and an extraction block that extracts relevant information from the simulations and the experimental data. The simulation and extraction blocks must both be design of experiments (DOE)/Optimization (Opt) blocks of the type described by Duane S. Boning and P. K. Mozumder in DOE/Opt: A System For Design of Experiments, Response Surface Modeling, and Optimization Using Process and Device Simulation, IEEE Transactions on Semiconductor Manufacturing, 7(2):233-245, May 1994, the contents of which are incorporated herein by reference. DOE/Opt is a simulation control interface that is used for running simulation experiments. Since design engineers already use DOE/Opt for running simulations, it is a simple step to make them work in conjunction with the above described tool in accordance with the present invention.
The tool, which is software as stated above, implements the tuning methodology in an easy to use interface. The tool uses engineer-specified DOE/Opt blocks to implement the simulations and extractions as shown in FIG. 1 and a set of routines to perform the modeling and optimization operations. For each new problem, the tool must be configured with the new simulation block, extraction block and variable definitions. Once this configuration is complete, the user can then interactively control the tuning process (i.e., which variables to tune with, which ranges to use, which outputs to use as targets, etc.). Additionally, the user is also given control of the tuning loop. For example, the complete tuning loop is the generation of the design, execution of the simulations, extraction of the data, modeling of the responses and optimization. Through interface options, the user can command the tool to perform all of these tasks together or any of the tasks individually.
The software tool is a TCL/TK program that runs, for example, under the UNIX environment and requires the presence of the DOE/Opt software as well as the Splus statistical analysis environment, which is described in Richard A. Becker, John M. Chambers and Allan R. Wilks, The New S Language: A Programming Environment for Data Analysis and Graphics, Wadsworth and Brooks/Cole, 1988, the contents of which are incorporated herein by reference. The software tool performs the model building and optimization shown within the dotted lines in FIG. 4.
There are four classes of inputs used by the software tool. These inputs are (1) tuning variables, (2) fixed variables, (3) range variables and (4) control variables.
Tuning variables control the behavior of the simulator outputs over the range of the range variable and are the input variables that are used to tune the simulator to the experimental data. For example, in the Vt tuning example herein, the Vt roll-off behavior is exhibited over the range of gate lengths. The exact nature of this roll-off behavior is controlled by the tuning parameters. Thus, the tuning parameters are adjusted to match the roll-off behavior as a function of gate length. In order to avoid making assumptions about the nature of the simulator outputs as a function of the range variable, only specific values of the range variable are simulated and modeled. No attempt is made to produce models as a function of the range variable. When the simulator is tuned, a simulation experiment is designed and executed using these variables. Examples of tuning variables are Tox and Npoly in CV tuning and LDDchar and PocketChar in IV tuning. The most important information for tuning variables is the range over which these variables are to be tuned. These ranges are set by the engineer to reflect an initial guess at the value of each parameter. If tuning requires more than one iteration, then the engineer uses the results from one tuning iteration to update the ranges for subsequent iterations. At the end of each iteration, the tool updates the default value of the tuned variable with the new optimized value.
Fixed variables are those variables that are necessary for the execution of the simulation, but are not tuned. For example, the upper and lower sweep limits for the gate voltage in a capacitance-voltage simulation may be variables. However, these variables are fixed for any given tuning problem. Therefore, the tool places these variables in a separate table that can be accessed and then hidden. This capability makes the interface simple and easy to understand.
A range variable is the "x" variable over which the experimental data are varied. For example, when tuning a 1-D process simulator, the range variable is depth into the silicon. When tuning simulation parameters to match a CV curve, the range variable is the gate voltage. When tuning simulation parameters to match an IV curve, the range parameter can be gate length. There are two types of range variable, explicit and implicit. An explicit range variable is one which requires a separate simulation for each value that the variable takes on. For example, in the IV tuning example mentioned above, changing the gate length requires a new simulation. Thus, gate length is an explicit range variable. An implicit range variable is one for which all values are simulated in a single run of the simulator. For example, in a CV simulation, all values of gate voltage are simulated (at discrete intervals) during one simulation. Therefore, if a number of gate voltages are specified for simulation, they are all performed together in a single simulation. A transformation for the range variable can also be specified during configuration of the tool. Since a separate response surface model (RSM) is generated for each value of the range variable, this transformation does not affect the modeling of the simulator. However, the transformation can make visualization simpler. For example, in tuning simulation parameters across gate length, it is advantageous to plot 1/Lgate instead of Lgate due to the rapid change in device performance as gate length is reduced below 0.7 micrometers. Thus, if gate length is the range variable, the reciprocal transformation is specified.
If a variable has a different value in the simulation block and extraction block, then it is a control variable. This type of variable is often used to enable separate tasks within a single DOE block. Thus, the simulation and extraction blocks can be the same block with different functionality, depending on the value of the control variable. The tool allows a control variable to be specified with separate values for the simulation and extraction actions. The tool updates the value of the control variable, depending on the desired functionality before executing the block. In this manner, the engineer can write all of the desired functionality into a single DOE block.
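The four variable classes described above might be collected into a single tool configuration. The following is a purely illustrative Python sketch; the names, values and dictionary layout are hypothetical and are not taken from the tool itself.

```python
# Hypothetical configuration for one tuning problem, grouping the four
# input classes the tool distinguishes (all names illustrative only).
config = {
    "tuning": {  # tuned over engineer-supplied ranges (initial guesses)
        "Tox":   {"range": (3.0e-9, 5.0e-9)},
        "Npoly": {"range": (1.0e19, 1.0e20)},
    },
    "fixed": {   # needed by the simulation, but never tuned; kept in a
                 # separate table that can be hidden from the interface
        "Vg_low": -2.0,
        "Vg_high": 2.0,
    },
    "range": {   # the "x" variable over which the experimental data vary
        "name": "Lgate",
        "type": "explicit",              # one simulation per value
        "transform": lambda x: 1.0 / x,  # reciprocal, for plotting only
    },
    "control": { # takes different values in the simulation and
                 # extraction actions of a shared DOE block
        "mode": {"simulation": "sim", "extraction": "extract"},
    },
}
```

The reciprocal transform, for instance, affects only visualization: `config["range"]["transform"](0.5)` yields 2.0, while the per-value RSMs are built on the untransformed gate lengths.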
In the prior art, no provision is made for the use of noisy data. However, when tuning to "state of the art" processes, the data are often noisy and incomplete. Therefore, instead of tuning directly to the data, tuning is to the expected value of the data at the values of the range variable that the engineer specifies (for example, gate lengths of 0.16, 0.18 and 0.25 micrometers). The expected value is obtained by fitting a spline model (see Green and Silverman, Non-parametric Regression and Generalized Linear Models, Chapman and Hall, 1994, the contents of which are incorporated herein by reference) to each output as a function of the range variable. The spline is used because it does not make any assumptions about the behavior of the output as a function of the range variable, thus making the method very general. This model is then used to predict the expected value of the experimental data at the points specified by the engineer (which may not have been exactly present in the data). These predicted values are then used as the targets for optimization and the spline models are discarded. These values can be used to estimate the noise in the tuning parameters.
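This target-extraction step can be sketched with a smoothing spline on synthetic noisy data. The roll-off curve, noise level and smoothing factor below are hypothetical stand-ins, and scipy's `UnivariateSpline` is one possible spline implementation, not necessarily the one used by the tool.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Noisy measurements of one output (e.g. Vt) across the range variable
# (e.g. gate length); the underlying curve here is purely hypothetical.
lgate = np.linspace(0.15, 0.5, 40)
true_vt = 0.4 + 0.05 / lgate                         # hypothetical roll-off
measured = true_vt + rng.normal(0.0, 0.005, lgate.size)

# Fit a smoothing spline; no parametric form is assumed for the output
# as a function of the range variable.  The smoothing factor s is set
# from the assumed noise variance.
spline = UnivariateSpline(lgate, measured, s=lgate.size * 0.005 ** 2)

# Predict the expected value at the engineer-specified gate lengths,
# which need not coincide with measured points; these predictions
# become the optimization targets and the spline is then discarded.
targets = spline(np.array([0.16, 0.18, 0.25]))
```

The targets track the underlying curve rather than any individual noisy measurement, which is the point of tuning to expected values.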
In accordance with the present invention, each response of the simulator is captured and response-surface models (RSMs) are then built relating these responses to the tuning parameters. An objective function is constructed from the outputs of the RSMs (sum of squared errors, etc.). Global optimization is performed over the tuning parameter space using the constructed objective function. Advantages of the invention are that the simulator is used only to build RSMs, thereby reducing the number of simulations and resulting in an absolute best answer within a given search space. There is also an indication if the best solution is outside of the given search space. The system in accordance with the present invention can be used to tune process simulators and to tune parameters for any complex simulator for which simulation time must be minimized.
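The per-response RSM construction and global optimization described above can be sketched as follows. This is a minimal illustration under stated assumptions: a toy two-parameter simulator stands in for the real one, the RSMs are linear least-squares fits, and an exhaustive grid search stands in for the global optimizer; all names and numbers are hypothetical.

```python
import itertools
import numpy as np

def toy_simulator(a, b, lg):
    """Stand-in simulator: one response per range-variable value lg
    (e.g. gate length), driven by two tuning parameters a and b."""
    return a / lg + b * lg

lgates = [0.18, 0.25, 0.5]                                  # range-variable values
targets = [toy_simulator(0.06, 0.8, lg) for lg in lgates]   # pretend experimental targets

# Design: a full-factorial simulation experiment over the tuning space.
a_pts = np.linspace(0.0, 0.1, 5)
b_pts = np.linspace(0.0, 1.5, 5)
design = list(itertools.product(a_pts, b_pts))

# One RSM per captured response: least-squares fit y = c0 + c1*a + c2*b.
X = np.array([[1.0, a, b] for a, b in design])
rsms = []
for lg in lgates:
    y = np.array([toy_simulator(a, b, lg) for a, b in design])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    rsms.append(coeffs)

def objective(a, b):
    """Sum of squared errors between RSM predictions and the targets."""
    preds = [c @ np.array([1.0, a, b]) for c in rsms]
    return sum((p - t) ** 2 for p, t in zip(preds, targets))

# Global optimization: a thorough search of the tuning space is affordable
# because only the RSMs -- never the simulator -- are evaluated.
grid = itertools.product(np.linspace(0.0, 0.1, 101),
                         np.linspace(0.0, 1.5, 151))
a_opt, b_opt = min(grid, key=lambda p: objective(*p))
```

A minimum on the boundary of the grid would indicate that the best solution lies outside the given search space, which is the tool's cue to widen the ranges for the next iteration.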
Advantages provided in accordance with the present invention are (1) a reduction in the number of simulations as compared with the prior art, (2) the ability to take advantage of job farming, (3) a thorough search of the tuning space, (4) the ability to tune to multiple data sets and (5) the ability, when tuning to multiple data sets, to perform statistical analysis of the tuning procedure to estimate the uncertainty in the tuned parameter values.
It follows that a software tool has been provided that facilitates the execution of the tuning algorithm. The software implements the portions of FIG. 4 shown in dotted boxes with the remainder of the algorithm being the set up of the software and the evaluation of solutions by the engineer. Thus, the engineer is required in the loop.