1. Field of the Invention
The present invention relates to a system and method for producing product property estimates for closed-loop control processes and, more particularly, to a system and method for estimating process stream compositions and/or product properties from process models, for improving those estimates, and for applying the improved estimates to control of the process.
2. Discussion of Related Art
Chemical, petrochemical, and polymer products are usually manufactured through a series of operations. Each of these operations is performed by equipment especially designed to achieve specified exit stream compositions and/or product properties. The entire sequence of operations results in the production of one or more product streams which must meet prescribed stream composition and/or product property specifications. Even byproduct or waste streams must meet specifications that generally limit the amount of useful product purged.
Industry has been very successful in automating the equipment regulatory control function. Control of pressure, temperature, level and flow produces stable and predictable operation of most process operations and can be most effectively applied through use of a modern distributed control system.
However, control of stream composition and/or product properties (stream quality measures) is still largely a manual operation that is performed by the operator after evaluation of analyzer or laboratory results. Attempts to automate stream composition and/or product property control using laboratory or process analyzers have been largely unsuccessful due to reliability and accuracy limitations, or because of cost factors.
Conventional techniques have taught that these problems may be overcome to a large extent by utilizing computer models of the process to compute stream quality values, and then using the computed values for closed-loop control. Computer models are relatively inexpensive, highly reliable, and can be run frequently, so that time delays are of little consequence. When properly designed, they require little maintenance.
Process computer models operating in parallel with the actual process can convert feed composition measurements, and pressure, temperature, level, and flow measurements for a particular operation into exit stream composition and product property estimates.
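As an illustrative sketch only (not the model disclosed herein), the conversion described above can be pictured as a function that maps measured inputs for a single separation operation to exit-stream estimates. The split-factor form, coefficients, and function name below are hypothetical assumptions chosen for clarity.

```python
def estimate_exit_streams(feed_rate, feed_frac_light, temperature_k):
    """Estimate overhead/bottoms rates and overhead purity from measured
    feed rate, feed composition, and temperature (illustrative model)."""
    # Hypothetical temperature-dependent split factor for the light
    # component: a hotter operation recovers more light material overhead.
    split_light = min(0.99, 0.80 + 0.0005 * (temperature_k - 350.0))
    split_heavy = 0.05  # assumed fraction of heavy component lost overhead

    light_in = feed_rate * feed_frac_light
    heavy_in = feed_rate * (1.0 - feed_frac_light)

    overhead = split_light * light_in + split_heavy * heavy_in
    bottoms = feed_rate - overhead  # overall material balance closes by construction

    # Estimated overhead purity (fraction of light component)
    purity = split_light * light_in / overhead
    return overhead, bottoms, purity
```

Run in parallel with the plant, such a function yields continuous exit-stream estimates between laboratory analyses.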
There are several requirements imposed by the use of process models:
(1) The accuracy and reliability of the input process variables to the process models (feed composition, pressure, temperature, level, and flow) must be assured. Erroneous input values may produce significant errors in the output results, thus rendering them useless.
(2) The estimates of stream composition and/or product properties produced by the process model must remain in reasonable agreement with calibrated analyses. The calibrated analysis may be from the process laboratory or a process analyzer. If the model cannot provide reasonable estimates of stream quality, then it is of little value.
(3) Simulation of a process involving one or more operations is usually most accurately represented over wide operating ranges by detailed first principles models based on energy and material balance relationships for each of the operations. Such models may be defined in terms of several thousand differential equations (variables), and tens of thousands of algebraic relationships.
(4) When dynamic models of the process are used to simulate process operations, then the dynamic models must run faster than real time if they are to faithfully track the process.
(5) When steady-state models of the process operation are used, then the process operation must reach steady-state with respect to exit stream quality measures within a relatively short time span. Steady-state models will not accurately estimate component concentrations or product properties in process operations with long holdup times.
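A rough feel for requirement (5) can be had by treating a well-mixed holdup as a first-order lag: it settles to about 95% of a step change after roughly three time constants, so long holdup times imply transients far too long for a steady-state model to capture. The three-time-constant rule of thumb and the helper name below are illustrative assumptions, not part of the disclosure.

```python
import math

def settling_time(holdup_volume, flow_rate, fraction=0.95):
    """Approximate time for a well-mixed holdup to settle to `fraction`
    of a step change, modeled as a first-order lag with tau = V / F."""
    tau = holdup_volume / flow_rate
    # Time for a first-order response to reach the requested fraction
    return -tau * math.log(1.0 - fraction)
```

For example, a holdup of 100 volume units at a throughput of 10 units per time unit gives a time constant of 10, and hence roughly 30 time units before exit quality settles.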
Both steady-state and dynamic models can be applied to separate operations within a process to produce an overall process simulation.
The use of dynamic models creates the question of how frequently to reinitialize the models with new input information. If the models are reinitialized at high frequency (e.g., several times per minute), then the time increment for the integration is kept very short. This provides very close tracking of the process, but severely limits the complexity of the model and the extent of input data reconciliation and calibration of model output because of time limitations.
On the other hand, if the frequency of reinitialization is reduced significantly (e.g., to several times each hour), then much more extensive and detailed models can be used to provide a broader and more realistic simulation of the entire process operation, although high frequency detail may be missing.
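The reinitialization tradeoff can be sketched as a loop that integrates a simple dynamic model between plant updates. The first-order model, explicit Euler step, and parameter values here are illustrative assumptions; the point is only that the interval `reinit_every` sets how much computation can be afforded per cycle.

```python
def run_model(measurements, reinit_every, dt=1.0, tau=30.0):
    """Integrate dx/dt = (u - x) / tau against a measurement sequence,
    resetting the state to the measured value every `reinit_every` steps.
    Returns the model trajectory (illustrative sketch only)."""
    x = measurements[0]
    trajectory = [x]
    for k, u in enumerate(measurements[1:], start=1):
        if k % reinit_every == 0:
            x = u                       # reinitialize from plant data
        else:
            x += dt * (u - x) / tau     # explicit Euler integration step
        trajectory.append(x)
    return trajectory
```

A small `reinit_every` tracks the plant closely but leaves little time per cycle for a detailed model; a large `reinit_every` permits an extensive model at the cost of high-frequency detail.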
Conventional techniques emphasize reinitialization at a high frequency and integration for very short time increments using very limited models that incorporate limited or no input data reconciliation, and simple model output corrections. However, these techniques have their limitations, as outlined below.
Dynamic models used for process control are typically quite limited in size and consist in most cases of less than a dozen differential equations. These are typically designed to control a single process or quality variable. These models are usually part of a sophisticated controller that cycles at high frequency to achieve close tracking of a process. This modeling approach is not practical for modeling complex operations such as a multi-component distillation column, or any other process that has multiple components and requires numerous differential equations to be accurately modeled.
Large-scale steady-state models have been applied successfully to processes under certain conditions where the process has been close to steady state. However, steady-state models are highly unreliable in predicting trace impurity levels in systems with long component response times, and would be inappropriate for simulating in real time a large-scale distillation operation involving high-purity streams with low impurity levels. They can also produce very poor estimates of product properties and/or stream composition during transients caused by changing operating rate or feed composition.
As discussed above, conventional techniques teach that models should incorporate all relevant elements of the process into a single model. This greatly limits the ability to model large processes. Use of several loosely coupled models to describe several operations within a process greatly increases the reliability of the individual models in tracking the process stream composition and/or product properties. This approach also allows for the use of both dynamic and steady state models to describe different operations in the process.
It has previously been suggested that only model output should be corrected and that neither model parameters nor model input should be adjusted. This may be satisfactory for small linear models, but large scale, nonlinear, multi-component models often tend to diverge without direct control of the model through adjustment of model parameters or input signals.
Conventional techniques, as illustrated by the Kalman filter design, utilize a combination of statistical filtering, alignment, and calibration that attempts to take into account all of the interactions among the state variables and expresses this in the covariance matrix. Since the number of elements in the covariance matrix is the square of the number of state variables, it is obvious that the number of state variables, and thus the number of differential equations, that can be handled is limited.
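The scaling argument above is simple arithmetic: an n-state Kalman filter carries an n-by-n covariance matrix, so storage grows with the square of the state dimension. The function name below is hypothetical; the numbers merely illustrate the growth.

```python
def covariance_elements(n_states):
    """Number of elements in the covariance matrix of a Kalman filter
    with n_states state variables."""
    return n_states * n_states

# A dozen states (typical control-sized model): 144 covariance elements.
# Several thousand states (first-principles model): millions of elements,
# which is why the tractable number of differential equations is limited.
```

Doubling the state dimension quadruples the covariance storage, and the cost of propagating the matrix grows faster still.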
The art teaches that differences between measured and computed values are best determined as arithmetic differences. Under certain conditions these differences, if used directly to correct a model output, might actually produce negative concentrations or product property values.
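A short numeric illustration of that hazard, with hypothetical values: an arithmetic bias computed at sample time can drive a later, smaller model output below zero, whereas a multiplicative (ratio) correction cannot produce a negative concentration.

```python
model_at_sample = 0.50   # model estimate when the lab sample was taken (%)
lab_value = 0.20         # calibrated laboratory analysis of that sample (%)

additive_bias = lab_value - model_at_sample   # -0.30
ratio_factor = lab_value / model_at_sample    # 0.40

model_now = 0.25         # model estimate at a later time (%)
corrected_additive = model_now + additive_bias   # -0.05, a negative concentration
corrected_ratio = model_now * ratio_factor       # 0.10, remains physically valid
```

The arithmetic difference is only safe when corrected outputs stay well away from zero; near-zero quantities such as trace impurities expose the problem.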
Dynamic modeling of processes has previously been formulated in a way that makes it very difficult to incorporate process information several hours old into the current calculations. In some cases the approach has been to reinitialize the model at the time the sample was taken, incorporate the new information, and then recalculate the process trajectory from that time until the current time. This is not a practical approach when considering simulating a large process operation, such as an entire distillation refining train.