Constrained non-linear optimization problems are composed of a non-linear objective function and may be subject to linear, bound and non-linear constraints. The constrained non-linear optimization problem to be solved may be represented as

Minimize f(x) over x

such that
Ci(x) ≤ 0, i = 1 . . . m
Ci(x) = 0, i = m+1 . . . mt
Ax ≤ b
Aeqx = beq
LB ≤ x ≤ UB  (A)

where Ci(x) denotes the non-linear inequality and equality constraints, the integer "m" is the number of non-linear inequality constraints, "mt" is the total number of non-linear constraints, Ax ≤ b and Aeqx = beq are the linear constraints, and LB and UB are the lower and upper bounds on the decision variable x.
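By way of illustration, a problem of form (A) can be posed with a general-purpose solver. The sketch below uses SciPy's trust-constr method; the objective, constraint data and bounds are made-up examples rather than part of the original formulation.

```python
# Illustrative sketch of posing a problem of form (A) with SciPy's
# trust-constr solver. All objective, constraint and bound data below are
# made-up example values, not part of the formulation above.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

def f(x):
    # Example non-linear objective.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Non-linear inequality C1(x) <= 0 and non-linear equality C2(x) = 0.
c_ineq = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2 - 2.0, -np.inf, 0.0)
c_eq = NonlinearConstraint(lambda x: x[0] + x[1] ** 3 - 1.0, 0.0, 0.0)

# Linear constraints A x <= b and Aeq x = beq.
lin_ineq = LinearConstraint([[1.0, 2.0]], -np.inf, 4.0)
lin_eq = LinearConstraint([[1.0, -1.0]], 0.5, 0.5)

# Bounds LB <= x <= UB.
bounds = [(-2.0, 2.0), (-2.0, 2.0)]

res = minimize(f, np.array([0.0, 0.0]), method="trust-constr",
               bounds=bounds,
               constraints=[c_ineq, c_eq, lin_ineq, lin_eq])
```

At the returned point, both equality constraints should be satisfied to within the solver's tolerance.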
There are a number of conventional approaches which have been utilized to attempt to solve the constrained non-linear optimization problem, each of which suffers from a number of drawbacks. The most popular approach to perform optimization subject to the non-linear constraints is to formulate a sub-problem with a penalty parameter, and then solve the sub-problem. The sub-problem formulated from the original problem (A) is shown below:
Θ(x, ρ) = f(x) + ρ ( Σi=1 . . . m max(0, (Ax−b)i) + Σi=m+1 . . . mt |(Aeqx−beq)i| + Σi=1 . . . m max(0, Ci(x)) + Σi=m+1 . . . mt |Ci(x)| )

This sub-problem is solved using a fixed value for the penalty parameter ρ, or by making ρ dependent on the iteration number. The non-linear constraints are included in the sub-problem formulation, and some implementations have included the linear constraints as well. The sub-problem is minimized subject to bound constraints only. Solving the sub-problem is difficult in that it requires the user to choose the right value of ρ for each problem. If the linear constraints are included in the sub-problem, the problem becomes still more difficult to solve.
A filter-based approach is primarily used in multi-objective optimization, but some researchers have applied the same technique to handle the non-linear constraints in a constrained optimization problem. The approach uses a pattern search method for general constrained optimization, based on the filter method for step acceptance. Essentially, a filter method accepts a step that improves either the objective function value or the value of some function that measures the constraint violation. This algorithm can be used for optimization problems with non-linear inequality constraints only, as shown in (B) below:

Minimize f(x) over x

such that
Ci(x) ≤ 0, i = 1 . . . m
Ax ≤ b
Aeqx = beq
LB ≤ x ≤ UB  (B)

Unfortunately, this algorithm lacks a convergence theory for the direct search class of algorithms. The lack of a convergence theory may prevent the solution from satisfying the Karush-Kuhn-Tucker (KKT) conditions of optimality, an essential condition for constrained optimization, so the solution obtained by this algorithm is not guaranteed to be a minimum of the original problem (A). Also, it is not efficient for problems with a large number of non-linear constraints.
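The filter acceptance test can be sketched as follows. The pattern-search step generation is omitted and a random candidate generator stands in for it; all problem data and names are illustrative, and the filter is kept deliberately simple (dominated entries are not pruned).

```python
# Sketch of filter-based step acceptance: a candidate is accepted if no
# filter entry is at least as good in BOTH the objective value and the
# constraint-violation measure. Problem data are illustrative stand-ins.
import numpy as np

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def h(x):
    # Aggregate violation of the non-linear inequalities c(x) <= 0.
    c = np.array([x[0] + x[1] - 1.0])
    return np.maximum(0.0, c).sum()

def acceptable(point, filt):
    fx, hx = point
    # Dominated if some filter entry is no worse in both measures.
    return not any(fe <= fx and he <= hx for fe, he in filt)

rng = np.random.default_rng(0)
x = np.array([2.0, 2.0])
filt = [(f(x), h(x))]
for _ in range(200):
    cand = x + rng.normal(scale=0.3, size=2)  # stand-in for a pattern-search step
    entry = (f(cand), h(cand))
    if acceptable(entry, filt):
        filt.append(entry)
        x = cand
```

A point that improves either measure relative to every filter entry is accepted, which is exactly the bi-objective trade-off described above; nothing in this loop enforces the KKT conditions, illustrating why convergence guarantees are the weak point.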
An additional approach uses an augmented Lagrangian formulation of a non-linearly constrained optimization. This approach can be used for optimization problems with non-linear equality and bound constraints only, as shown in (C) below:

Minimize f(x) over x

such that
Ci(x) = 0, i = 1 . . . mt
LB ≤ x ≤ UB  (C)

The key drawback of this algorithm is that it treats the linear constraints as non-linear, making it inefficient in handling a large number of constraints. It also uses a slack variable (a variable that is introduced when inequality constraints are replaced by equalities) to convert each inequality constraint to an equality constraint. The use of slack variables increases the size of the problem.
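The slack-variable conversion and an augmented Lagrangian outer loop can be sketched as follows. All problem data, the multiplier update and the penalty schedule are illustrative assumptions, not the specific algorithm referenced above.

```python
# Sketch of an augmented-Lagrangian outer loop for a problem of form (C).
# An inequality c(x) <= 0 is first rewritten as the equality c(x) + s = 0
# with a slack variable s >= 0, enlarging the problem by one variable per
# inequality -- the overhead noted in the text. Data are illustrative.
import numpy as np
from scipy.optimize import minimize

def f(z):
    x = z[:2]                      # z = (x, s); the slack does not enter f
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def ceq(z):
    x, s = z[:2], z[2]
    # Original inequality x0 + x1 - 2 <= 0, rewritten with slack s >= 0.
    return np.array([x[0] + x[1] - 2.0 + s])

def aug_lagrangian(z, lam, mu):
    c = ceq(z)
    return f(z) + lam @ c + 0.5 * mu * (c @ c)

bounds = [(-5, 5), (-5, 5), (0, None)]  # slack bounded below by zero
lam, mu = np.zeros(1), 10.0
z = np.zeros(3)
for _ in range(10):
    res = minimize(aug_lagrangian, z, args=(lam, mu),
                   bounds=bounds, method="L-BFGS-B")
    z = res.x
    lam = lam + mu * ceq(z)  # first-order multiplier update
    mu *= 2.0                # tighten the penalty each outer iteration
```

For this toy problem the loop drives the iterate to the constrained minimizer x = (1, 1) with the slack active at zero, while the equality residual shrinks with each multiplier update.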
Thus, the penalty approach requires the difficult selection of an appropriate penalty parameter; the filter-based approach handles non-linear inequality constraints only and does not scale well; and the augmented Lagrangian approach treats linear and non-linear constraints the same and relies on slack variables, which carry an overhead price.