In industrial processes such as petroleum refining processes, petrochemical processes, and the like, modeled predictive control is known as a multivariable control technique, that is, a method for controlling multi-input/multi-output dynamic systems.
Modeled predictive control was originally developed as a control technique for multi-input/multi-output processes wherein the manipulated variables (the variables inputted to the process) and the control variables (the variables outputted from the process) settle on (asymptotically approach) target values while conforming to imposed constraints such as upper and lower limits. In addition, at present the target values in modeled predictive control are optimized for the steady-state of the process through optimization techniques such as linear programming (hereinafter termed "LP") and quadratic programming (hereinafter termed "QP"). See, for example, the following literature:
    Japanese Patent No. 4614536 (the "JP '536");
    Masahiro Ohshima, Modeled Predictive Control—Birth, Development, and Deployment of the Theory, Measurement and Control, Vol. 39, No. 5, pp. 321-325, 2000;
    Teruo Ishikawa, et al. ("Ishikawa"), Method for Eliminating Bad Parameters in Modeled Predictive Control Having a Steady-State Optimization Function, Journal of Chemical Engineering, Vol. 24, No. 1, pp. 24-29, 1998;
    S. Joe Qin, et al. ("Qin"), A Survey of Industrial Model Predictive Control Technology, Control Engineering Practice, Vol. 11, pp. 733-764, 2003; and
    Jan M. Maciejowski (Shuichi Adachi and Masaaki Kanno, trans.), Modeled Predictive Control—Constraining Factors in Optimal Control Thereof, Tokyo Denki Daigaku Publishing, 2005.
An example of such modeled predictive control will be explained below. As illustrated in FIG. 15, a system for performing multivariable modeled predictive control includes a steady-state optimizing portion (optimizing device) 401 and a modeled predictive controlling portion 402 for carrying out calculations in the multivariable modeled predictive control. The steady-state optimizing portion 401 inputs an optimization evaluating function, upper and lower limit constraint values, manipulated variables, control variables, and the like, to calculate optimal target values through an optimization technique such as LP, QP, or the like.
The modeled predictive controlling portion 402 inputs the optimal target values, the control variables, the upper and lower limit values, and the like, to calculate the manipulated variables, and outputs the manipulated variables to the process 403 of the control object. The modeled predictive controlling portion 402 carries out the control calculations (the calculations for the manipulated variables) while taking into consideration the upper and lower limit constraints so that the manipulated variables and control variables in the process will converge to the optimal target values. The process 403 that is the control object inputs the manipulated variables from the modeled predictive controlling portion 402 and outputs the control variables. The details of the specific calculations in the modeled predictive control are given in, for example, Maciejowski, so explanations thereof are omitted here.
The operation of the steady-state optimizing portion 401 will be explained next. As illustrated in FIG. 16, a data collecting portion 404 collects data that is required for predicting a future state of the process 403, such as the control variables, the manipulated variables, noise variables, and the like, from the process 403 that is the control object. This data is sent to a future predicting portion 405. In the future predicting portion 405, future predicted values for the control variables, and the like, which serve as the basis for the target value calculations, are calculated. Typically, the values for the control variables at convergence are calculated assuming no change in the values that are inputted into the process 403, such as the manipulated variables, noise variables, and the like. In other words, the values of the control variables in the steady-state, which will be arrived at if no new operations are carried out in the process 403, are predicted.
The model storing portion 406 stores a mathematical model for the process 403. Examples of mathematical models include transfer function models, state space representation models, and the like. The target value calculating portion 407, based on the future predicted value for the variable in the process 403 (which is normally a predicted variable in the steady-state), the mathematical model for the process 403, and the upper and lower limit constraints on the variables, uses LP or QP to calculate target values for optimizing the given optimization evaluating function. These values are passed to the modeled predictive controlling portion as optimal steady-state target values.
The process by which the target value calculating portion 407 determines the target values through LP or QP will be explained in greater detail next. In the below, u1, u2, . . . , um indicate values of manipulated variables, and y1, y2, . . . , yn indicate values of control variables. The number of manipulated variables is m, and the number of control variables is n. k is an index indicating the current control interval. When the control variables and the manipulated variables are each treated as a vector, they are notated as shown in Equation (1). Moreover, when the control variables and manipulated variables are treated together as a single vector, it is expressed as x, as shown in Equation (2).
Expression 1
u = [u1, u2, . . . , um]ᵀ, y = [y1, y2, . . . , yn]ᵀ  (1)
x = [uᵀ, yᵀ]ᵀ  (2)
A method for carrying out optimization on a deviation from a base point, using, as the base point, the variable values in the steady-state calculated by the future predicting portion 405, will be explained below (where this will be termed "differential optimization" below). For example, the JP '536 describes a method for calculating target values for the steady-state of the system that is the object of modeled predictive control, and in the JP '536 the calculations are carried out as differential optimization. Moreover, Ishikawa proposes a method for determining the optimal target values through LP, and it is differential optimization that is used in Ishikawa as well.
Here, too, the explanation deals primarily with differential optimization. Note that the present invention itself is not limited to differential optimization, but rather can be applied to optimization that is not differential optimization.
One example wherein differential optimization is expressed in equations is presented here. The future predicting portion takes as the base point the manipulated-variable values u(k−1) from one control interval earlier and the convergence value y(∞) that the control variables will reach if those manipulated-variable values are held, and the equations are set up using the differences Δu and Δy from that base point. The relationships between u, y, Δu, Δy, u(k−1), and y(∞) will be as given in Equation (3), below.
Expression 2
u = u(k−1) + Δu, y = y(∞) + Δy
u(k−1) = [u1(k−1), u2(k−1), . . . , um(k−1)]ᵀ, y(∞) = [y1(∞), y2(∞), . . . , yn(∞)]ᵀ
Δu = [Δu1, Δu2, . . . , Δum]ᵀ, Δy = [Δy1, Δy2, . . . , Δyn]ᵀ, Δx = [Δuᵀ, Δyᵀ]ᵀ  (3)
This will be explained below in accordance with FIG. 17. The variable steady-state predicted values are the values for the process variables in the steady-state, calculated by the future predicting portion, corresponding to u(k−1) and y(∞) in the equations above. The upper and lower limit constraints are upper and lower limit values on the steady-state target values of the variables. Here the variable values that are underlined express lower limits, and those that are overlined express upper limits. The inequality that is to be satisfied by ui (the ith manipulated variable), for example, is as follows:
Expression 3
u̲i ≤ ui ≤ ūi  (4)
The constraints with which the manipulated variables and control variables must comply are determined by the steady-state variable constraint setting portion 471. Generally, the constraints are set as requirements that each of the variables comply with its lower limit and upper limit. When differential optimization is performed, the constraint inequalities are set up for the differences from the base point, as in Equation (5), and passed to the solution calculating portion 472.
Expression 4
u̲i − ui(k−1) ≤ Δui ≤ ūi − ui(k−1), for i = 1, . . . , m
y̲i − yi(∞) ≤ Δyi ≤ ȳi − yi(∞), for i = 1, . . . , n  (5)
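The conversion of the original upper and lower limit values into the differential bounds of Equation (5) can be sketched as follows; this is an illustrative example only, and the function name and the numerical values are assumptions, not drawn from the references.

```python
import numpy as np

def differential_bounds(u_prev, y_inf, u_lo, u_hi, y_lo, y_hi):
    """Convert absolute upper/lower limits on u and y into bounds on
    the deviations (du, dy) from the base point (u(k-1), y(inf)),
    as in Equation (5)."""
    du_lo = np.asarray(u_lo) - np.asarray(u_prev)
    du_hi = np.asarray(u_hi) - np.asarray(u_prev)
    dy_lo = np.asarray(y_lo) - np.asarray(y_inf)
    dy_hi = np.asarray(y_hi) - np.asarray(y_inf)
    return (du_lo, du_hi), (dy_lo, dy_hi)

# Example: u1 currently 10 with limits [8, 12]; y1 predicted to
# converge to 100 with limits [80, 120].
(du_lo, du_hi), (dy_lo, dy_hi) = differential_bounds(
    [10.0], [100.0], [8.0], [12.0], [80.0], [120.0])
# du1 must then lie in [-2, 2] and dy1 in [-20, 20]
```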
The optimization evaluating function is the evaluating function for LP or QP. Typically, it is provided as a function that is to be minimized (where, if the optimization objective is to be maximized, a minimization problem can be produced by applying an evaluating function wherein the signs are reversed). In the case of differential optimization, an evaluating function J(Δx) is applied as illustrated in Equation (6), below.
Expression 5
J(Δx) = ΔxᵀHΔx + hᵀΔx  (6)
If H here is nonzero, then this is QP, but if H is zero, this is LP. Note that if the evaluating function is given as a function of x rather than as a function of Δx, then Equation (3) can be used to convert it into the form in Equation (6).
The model that is the control object provides a model for obtaining equations giving the relationships between the manipulated variables and the control variables. If using differential optimization, then it is assumed that the magnitudes of changes Δy in the control variables are proportional to the magnitudes of changes Δu in the manipulated variables, which is typically expressed as in Equation (7), below.Expression 6Δy=G0Δu  (7)
The G0 in Equation (7) is the model of the control object; specifically, it is a gain matrix (n rows × m columns). For example, if the transfer function model of the control object is G(s), then the gain matrix can be calculated by setting G0 = G(0).
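For a transfer function given by polynomial coefficients, G(0) reduces to the ratio of the constant terms of the numerator and the denominator. A minimal sketch, assuming a rational G(s) whose denominator constant term is nonzero (that is, no integrating element); the function name is an assumption for illustration:

```python
def steady_state_gain(num, den):
    """Steady-state gain G(0) of a transfer function whose numerator
    and denominator polynomial coefficients are listed highest power
    first; valid only when den[-1] != 0 (no integrating element)."""
    return num[-1] / den[-1]

# Example: G(s) = 2 / (5s + 1)  ->  G(0) = 2
g = steady_state_gain([2.0], [5.0, 1.0])
```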
A solution calculating portion 472 calculates Δx so as to minimize the evaluating function of Equation (6) within the range that satisfies the constraints of Equation (5) and Equation (7). The optimal solution that is produced is a difference from the base point (u(k−1), y(∞)), so the final optimal steady-state target value can be obtained by adding this difference to the base point. Note that the techniques themselves for solving this optimization problem are known technologies.
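One way to sketch the LP case (H = 0) of this calculation is with `scipy.optimize.linprog`, treating Δx as the decision variable, Equation (7) as an equality constraint, and the bounds of Equation (5) as box constraints. The gain matrix, cost vector, and bounds below are illustrative assumptions, not values from the references.

```python
import numpy as np
from scipy.optimize import linprog

# dx = [du1, du2, dy1]: two manipulated variables, one control variable.
G0 = np.array([[1.0, -1.0]])        # gain matrix of Equation (7)
h = np.array([1.0, 0.0, 0.0])       # linear cost h^T dx (minimize du1)

# Model constraint dy - G0 du = 0, written as A_eq @ dx = 0
A_eq = np.hstack([-G0, np.eye(1)])

# Differential bounds of Equation (5)
bounds = [(-2.0, 2.0), (-1.0, 1.0), (-20.0, 20.0)]

res = linprog(h, A_eq=A_eq, b_eq=np.zeros(1), bounds=bounds)
dx_opt = res.x   # optimal deviation from the base point (u(k-1), y(inf))
# the final target value is the base point plus dx_opt
```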
Note that the specific method for calculating in the target value calculating portion 407 is not limited to the above, but rather a different calculating process is also possible. For example, while in the above the constraints on the manipulated variables and control variables were defined through limitation to being within upper and lower limit values, there are also other methods. For example, there is a method wherein penalties that depend on the magnitude of deviation of the control variables from the upper and lower limit values are added to the evaluating function described above. In the case of this method, the control variables are allowed to deviate outside of the upper and lower limit values, but the greater the magnitude of the deviation, the greater the value of the evaluating function, thus suppressing the magnitude of the deviation.
Note that here the term "constraint" includes both a meaning wherein no deviation whatsoever outside of the upper and lower limit values or from the setting values is allowed, and a meaning wherein deviations outside of the upper and lower limit values or from the setting values are allowed, but suppressed through penalties. Moreover, constraints of the type wherein deviations outside of the upper and lower limit values, or from the setting values, are not allowed are termed "hard constraints," and constraints of the type wherein such deviations are allowed are termed "soft constraints." In the case of soft constraints, deviations outside of the original upper and lower limit values and from the setting values are allowed, and thus the solution might not satisfy those conditions. Here these cases are included in the expression "satisfaction of constraints," but it should be noted that, in actuality, that which is satisfied is a constraint that is broader than the original "upper and lower limit values" and "setting values."
Note that there is one problem in the method explained up until this point. This is a problem in that this method cannot be used when the process includes an integrating element.
An integrating element is an element wherein the output is proportional to a time-based integral of the input. Expressed in terms of a transfer function, this is K/s. The use of a tank will be explained as an example of a dynamic system that has an integrating element (hereinafter termed an “integrating system”). When it is assumed that the volume of a fluid within a tank is y1, the flow rate of the inflow is u1, and the flow rate of the outflow is u2, then these relationships can be expressed as in Equation (8), below. If one considers the flow rate of the inflow and the flow rate of the outflow as inputs to the system and the volume as the output, this system is an integrating system.
Expression 7
y1(t) = ∫₀ᵗ (u1(t′) − u2(t′)) dt′ + y1(0)  (8)
As long as the input is not zero, the output of an integrating element will not be constant. Because of this, the output of an integrating system changes continuously, rather than holding a constant value. For example, in the tank example, if the flow rate of the inflow and the flow rate of the outflow are not in equilibrium, then there will be continuous change in the volume of fluid in the tank. In a state wherein the flow rate of the inflow is continuously greater than the flow rate of the outflow, the fluid within the tank will eventually overflow, but if it is the flow rate of the outflow that is greater, then eventually the tank will become empty. Of course, normally control is required to prevent this from happening.
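The behavior of the tank in Equation (8) can be checked with a simple Euler-integration sketch; the flow rates and step sizes are illustrative assumptions.

```python
def simulate_tank(y0, u1, u2, dt=0.1, steps=100):
    """Euler integration of Equation (8): the volume y1 accumulates
    the difference between inflow u1 and outflow u2."""
    y = y0
    for _ in range(steps):
        y += (u1 - u2) * dt   # dy1/dt = u1 - u2
    return y

balanced = simulate_tank(50.0, u1=10.0, u2=10.0)  # flows in equilibrium
filling = simulate_tank(50.0, u1=11.0, u2=10.0)   # sustained imbalance
# the balanced tank holds its volume; the unbalanced one drifts steadily
```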
Even among actual processes, there are those that have integrating elements, and these may be subject to optimization. However, the techniques described above cannot be applied as-is to these processes. The reason is that when the process includes an integrating element, G0 will be unbounded, because the transfer function of the integrating element is K/s, which goes to infinity when s = 0. Because of this, the techniques for optimizing processes that include integrating elements differ from the normal techniques. For example, Section 3.3.4 (page 753) of Qin, which was written for industrial application of modeled predictive control, describes two methods as techniques for steady-state optimization of processes that have integrating elements.
One is a method that adds a constraint that the slope of a control variable that includes the output of the integrating element (hereinafter termed a "control variable that includes an integrating system") be zero. This configuration is illustrated in FIG. 18. The difference from the configuration when there is no integrating element in the control object is the addition of the integrating system slope predicting portion 408. Here the values to which the slopes of the control variables that include an integrating system converge are predicted, assuming no changes in the inputs into the process. The target value calculating portion 407 determines target values so that the sum of each of these predicted values and the magnitude of change in the corresponding slope will always be zero.
One example is illustrated below. The control variables are expressed as y1 and y2 and the manipulated variables are expressed as u1 and u2, where there is no integrating element in the transfer functions from u1 and u2 to y1, and where the transfer functions from u1 and u2 to y2 include an integrating element. Moreover, the slope of y2 is expressed as y2(SL). Given these assumptions, the optimizing calculation for finding the target values is formulated as Equation (9), below.
Expression 8
u(k−1) = [u1(k−1), u2(k−1)]ᵀ, y(∞) = [y1(∞)]ᵀ
Δu = [Δu1, Δu2]ᵀ, Δy = [Δy1]ᵀ, Δx = [Δuᵀ, Δyᵀ]ᵀ
J(Δx) = ΔxᵀHΔx + hᵀΔx
Δy = [G1,1(0)  G1,2(0)]Δu
u̲i − ui(k−1) ≤ Δui ≤ ūi − ui(k−1), for i = 1, 2
y̲1 − y1(∞) ≤ Δy1 ≤ ȳ1 − y1(∞)  (9)
In the equations up to this point, there are essentially no differences from Equations (5) through (7), except that y2 is outside the scope of application. In addition, the following new equation, Equation (10), is added.
Expression 9
y2(SL)(∞) + Δy2(SL) = 0
Δy2(SL) = SΔu
S = lim(s→0) s[G2,1(s)  G2,2(s)]  (10)
Here y2(SL)(∞) is the value that is predicted by the integrating system slope predicting portion. The left side of the first line of Equation (10) adds to this value the magnitude of change Δy2(SL) in the slope of y2 that is produced by the magnitude of change Δu in u, and the requirement that this sum be equal to zero is added as a constraint that must be satisfied. Note that Gj,i(s) indicates the transfer function from the manipulated variable i to the control variable j.
The operation of the target value calculating portion 407 will be explained again using FIG. 19. The points of difference from the configuration when there is no integrating system are the slope gain matrix calculating portion 474 and the slope upper and lower limit setting portion 475. Note that the target value calculating portion 407 is provided with a steady-state gain matrix calculating portion 473. The slope upper and lower limit setting portion 475 corresponds to the first and second lines in Equation (10), and sets the upper and lower limits on the slopes of the control variables that include an integrating system. Generally, as in the first line of Equation (10), the upper limit = the lower limit = 0. The slope gain matrix calculating portion 474 calculates the matrix that defines the relationship between Δu and the magnitude of change Δy2(SL) in the slope of y2, that is, the matrix S in the third line of Equation (10). The Δx that minimizes the evaluating function J(Δx) is calculated based on Equation (9) by the solution calculating portion 472. The process for obtaining the optimal steady-state target value based on Equation (9) by the solution calculating portion 472 is fundamentally the same as in the conventional technology. Doing this makes it possible to perform optimization even for a process that includes an integrating element.
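Adding the slope constraint of Equation (10) to an LP formulation amounts to one more equality row, S Δu = −y2(SL)(∞). The matrices and limits below are illustrative assumptions, not values from the references.

```python
import numpy as np
from scipy.optimize import linprog

# dx = [du1, du2, dy1]; y2 (integrating) appears only via its slope.
G_row = np.array([[0.5, 0.8]])    # [G11(0), G12(0)] for dy1
S = np.array([[1.0, -1.0]])       # slope gain matrix of Equation (10)
y2_slope_inf = 0.4                # predicted slope of y2 at convergence
h = np.array([1.0, 1.0, 0.0])     # minimize du1 + du2 (LP case)

# Equalities: dy1 = G_row @ du, and y2_slope_inf + S @ du = 0
A_eq = np.block([[-G_row, np.eye(1)],
                 [S, np.zeros((1, 1))]])
b_eq = np.array([0.0, -y2_slope_inf])

bounds = [(-2.0, 2.0), (-2.0, 2.0), (-10.0, 10.0)]
res = linprog(h, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
du = res.x[:2]   # any solution drives the predicted slope of y2 to zero
```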
The other method shown in Qin is one wherein a term that is proportional to the size of the slope of the control variable that includes the integrating system (for example, the square of the slope) is added, as a penalty function, to the evaluating function that is to be optimized. The overall structure is the same as in the method described above, but, as illustrated in FIG. 20, there is one difference in the structure of the target value calculating portion 407.
The slope penalty setting portion 475a sets a penalty function in relation to the slope of the control variable that includes the integrating system. The penalty function is defined as P(Δx), and is expressed by Equation (11).
Expression 10
P(Δx) = p(y2(SL)(∞) + Δy2(SL))² = p(y2(SL)(∞) + SΔu)²  (11)
The meanings of y2(SL)(∞), Δy2(SL), Δu, and S are the same as in the example described above. p is a constant that determines the size of the penalty. This Equation (11) means that a penalty is applied that is proportional to the square of the slope of y2.
Moreover, the solution calculating portion 472 is provided with an evaluating function correcting portion 473a, where the evaluating function correcting portion 473a adds the penalty function P(Δx) set by the slope penalty setting portion to the applied evaluating function J(Δx), to produce a new evaluating function. This evaluating function is used in the solution calculating portion 472 instead of J(Δx). When the evaluating function after correction is expressed as Jmodified(Δx), the result is the equation: Jmodified(Δx) = J(Δx) + P(Δx)  (12)
The solution calculating portion 472 calculates the Δx that minimizes Jmodified(Δx). Because of this, if the constant p relating to the penalty term is set at an adequately large value, then reducing the slope of the control variable will take priority over minimizing the original evaluating function. As a result, it is possible to obtain a target value wherein the given optimization evaluating function is as small as possible while keeping the value of the slope of the control variable adequately small.
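The penalty variant can be sketched by minimizing Equation (12) directly; the weights and the slope data below are illustrative assumptions, and a general-purpose minimizer stands in for the LP/QP solver.

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([1.0, -1.0])   # slope gain row for y2
y2_slope_inf = 0.4          # predicted slope of y2 at convergence
p = 1e4                     # penalty weight: large p forces a small slope

def J(du):                  # original evaluating function (linear here)
    return du[0] + du[1]

def J_modified(du):         # Equation (12): J plus the penalty of (11)
    slope = y2_slope_inf + S @ du
    return J(du) + p * slope ** 2

res = minimize(J_modified, x0=np.zeros(2),
               bounds=[(-2.0, 2.0), (-2.0, 2.0)])
residual_slope = y2_slope_inf + S @ res.x   # driven close to zero
```

Because the slope enters only through a penalty, it is suppressed rather than forced exactly to zero, which is precisely the soft-constraint behavior described above.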
The use of these methods makes it possible to optimize the target value in a steady-state even in a process that is an integrating system.
However, in the methods described for the conventional technology there is a problem in that it is not possible to obtain the optimal solution even when a better optimal target value actually exists. For example, let us consider a process that has an integrating element, with one manipulated variable and one control variable, as illustrated in Equation (13), below:
Expression 11
y1(t) = ∫₀ᵗ (u1(t′) − d) dt′  (13)
In Equation (13), u1 is a manipulated variable (input), y1 is a control variable (output), and d is a constant in the process. Here the optimization object is that of minimizing the manipulated variable u1. If this process behaves strictly in accordance with Equation (13), then the solution by the conventional technology will be u1=d, or a value near thereto. This is because if the value of u1 is other than d, then y1 will have a slope, and the conventional technology determines a target value so this does not occur.
However, in an actual process, unlike the assumption described above, the behavior is not always in accordance with Equation (13). For example, there are also processes that have nonlinear characteristics, such as d being dependent on the control variable y1, such as in Equation (14), below.
Expression 12
y1(t) = ∫₀ᵗ (u1(t′) − d(y1)) dt′  (14)
Here let us assume that the process has the characteristic of the value of d also falling monotonically with a reduction in y1, as illustrated in FIG. 21. For example, if d is 10 when y1 is 400 but d goes to 8 when y1 goes to 80, then this is a process wherein there is a positive correlation between y1 and d. Moreover, let us assume that the lower limit values for u1 and y1 are, respectively, 8 and 80, and the current values are 10 and 100. In this case, in the conventional technology, the optimal target value is determined based on the current estimated value of the slope of y1, and thus the target value for u1 will be 10.
However, in actuality, if u1 were controlled so as to be smaller than 10, that is, if u1 and y1 were reduced gradually along the heavy line illustrated in FIG. 21, then the value of u1 that causes the slope of y1 to go to zero would also become smaller, making it possible to further optimize the target value for u1. If such control were continued, then ultimately it would be possible to reduce the target value for u1 to 8. Summarizing the above: in a case wherein d is dependent on y1, full optimization of the target value is not possible with the conventional technology; in other words, there is still room for further optimization. In this way, conventionally there has been a problem in that appropriate control reflecting the actual characteristics of the control object has not been possible.
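The trajectory along the heavy line can be illustrated numerically; the linear d(y1) below (d = 10 at y1 = 400 falling to d = 8 at y1 = 80) and the simple rule of holding u1 just below the current equilibrium value are assumptions made for illustration only, not the references' method.

```python
def d_of_y1(y1):
    # illustrative monotonic characteristic consistent with FIG. 21
    return 8.0 + 2.0 * (y1 - 80.0) / 320.0

def run(y1=100.0, dt=0.1, steps=20000):
    """Hold u1 just below the current equilibrium value d(y1) so that
    y1 (and with it d) drifts downward, as in Equation (14), toward
    the lower limits (8 for u1, 80 for y1)."""
    for _ in range(steps):
        u1 = max(8.0, d_of_y1(y1) - 0.05)     # stay just below equilibrium
        y1 = max(80.0, y1 + (u1 - d_of_y1(y1)) * dt)
    return u1, y1

u1_end, y1_end = run()
# u1 settles at its lower limit 8 rather than the conventional answer 10
```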
The present invention was created in order to eliminate problems such as those set forth above, and an aspect thereof is to enable more appropriate control given the actual characteristics of the control object.