(1) Field of Invention
The present invention relates to a system for high-dimensional optimization and, more particularly, to a system for high-dimensional optimization utilizing self-aware particle swarm optimization.
(2) Description of Related Art
The present invention is related to a high-dimensional optimization system for applications such as network optimization, computer vision, and smart antennas, using a modification of particle swarm optimization. The prior art consists of evolutionary heuristic optimization algorithms, such as genetic algorithms and particle swarm optimization (PSO).
PSO is a simple but powerful population-based algorithm that is effective for optimization of a wide range of functions as described by Kennedy et al. in “Swarm Intelligence”, San Francisco: Morgan Kaufmann Publishers, 2001, and by Eberhart and Shi in “Particle Swarm Optimization: Developments, Applications, and Resources,” in Proceedings of Institute of Electrical and Electronics Engineers (IEEE) Congress on Evolutionary Computation (CEC 2001), Korea, 2001, which are hereby incorporated by reference as though fully set forth herein.
PSO models the exploration of a multi-dimensional solution space by a “swarm” of agents, where the success of each agent influences the dynamics of the other members of the swarm. PSO has its roots in theories of social interaction. Each “particle” in the swarm resides in a multi-dimensional solution space. The positions of the particles represent candidate problem solutions. Additionally, each particle has a velocity vector that allows it to explore the space in search of an objective function optimum. Each particle i keeps track of a position vector y_i that represents the current best solution the particle has found. Another position vector y_g is used to store the current global best solution found by all of the particles. The velocity and position vectors for particle i are then changed probabilistically according to the following set of dynamic update equations:

v_i(t+1) = w·v_i(t) + c1·q1·[y_i(t) − x_i(t)] + c2·q2·[y_g(t) − x_i(t)]
x_i(t+1) = x_i(t) + χ·v_i(t+1)

where x_i(t) and v_i(t) are the position and velocity vectors at time t of the i-th particle, and c1 and c2 are parameters that weight the influence of the “individual best” and “swarm best” terms. w is a momentum constant that prevents premature convergence, and χ is a constriction factor which also influences the convergence of PSO. q1 and q2 are random variables that allow the particles to better explore the solution space. The described dynamics cause the swarm to concentrate very quickly on promising regions of the solution space with very sparse sampling of the solution space.
FIG. 1 illustrates a model of PSO, depicting a multi-dimensional parameter solution space 100 through which particles, for example particle P_i 102, travel in search of the objective function optimum. As described previously, the positions of the particles represent vectors of multi-node parameter values in the solution space 100. In addition, each of the particles, including P_i 102, has a velocity vector 104 that allows it to explore the multi-dimensional parameter solution space 100.
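The standard PSO dynamics described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the claimed invention: the objective function (a sphere function), the parameter values (w, c1, c2, χ), and the particle count are assumptions chosen only to demonstrate the update equations.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, chi=1.0, bounds=(-5.0, 5.0), seed=0):
    """Minimize `objective` using the standard PSO update equations.

    w      -- momentum (inertia) constant
    c1, c2 -- weights of the "individual best" and "swarm best" terms
    chi    -- constriction factor applied to the new velocity
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    y = x.copy()                                  # individual best positions
    fy = np.apply_along_axis(objective, 1, x)     # individual best fitnesses
    g = y[np.argmin(fy)].copy()                   # swarm best position

    for _ in range(iters):
        # q1, q2: random variables that let particles explore the space
        q1 = rng.random((n_particles, dim))
        q2 = rng.random((n_particles, dim))
        # v_i(t+1) = w*v_i(t) + c1*q1*[y_i - x_i] + c2*q2*[y_g - x_i]
        v = w * v + c1 * q1 * (y - x) + c2 * q2 * (g - x)
        # x_i(t+1) = x_i(t) + chi*v_i(t+1)
        x = x + chi * v
        fx = np.apply_along_axis(objective, 1, x)
        better = fx < fy                          # update individual bests
        y[better], fy[better] = x[better], fx[better]
        g = y[np.argmin(fy)].copy()               # update swarm best
    return g, objective(g)

# Illustrative use: sphere function, whose optimum is at the origin.
best, f_best = pso(lambda p: float(np.sum(p * p)), dim=10)
```

Note that in this sketch the swarm best is updated once per iteration after all particles move; variants that update it asynchronously within an iteration are also common.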
Although PSO is a relatively new area of research, extensive literature exists that documents its efficiency and robustness as an optimization tool for high-dimensional spaces, as described in the special issue on Particle Swarm Optimization of IEEE Transactions on Evolutionary Computation, Vol. 9, No. 3, June 2004, and by Hassan et al. in “A Comparison of Particle Swarm Optimization and the Genetic Algorithm,” American Institute of Aeronautics and Astronautics (AIAA) Conference, 2005, which are hereby incorporated by reference as though fully set forth herein.
Both theoretical analysis and practical experience show that PSO converges on good solutions for a wide range of parameter values. The evolution of good solutions is stable in PSO because of the way solutions are represented (e.g., small changes in the representation result in small changes in the solution). Simulations have shown that the number of particles and iterations required is relatively low and scales slowly with the dimensionality of the solution space.
While the above methods are effective, the present invention provides an alternative to the prior art: by adapting its algorithm parameters to the problem automatically through self-monitoring, it achieves faster convergence and can handle higher-dimensional problems with the same computational resources.