Cooperative optimization
Cooperative optimization is a global optimization method invented by the Chinese mathematician Xiao-Fei Huang. It can solve real-world NP-hard optimization problems with up to millions of variables. It can recognize whether a solution it has found is a global optimum, and it can identify which search directions are more promising than others for finding one. Under some very general settings, a cooperative optimization algorithm has a unique equilibrium, converges to it regardless of initial conditions, and is insensitive to perturbations. It does not struggle with local minima.
Cooperative optimization is based on a system of multiple agents cooperating to solve a hard optimization problem. The problem is divided into a number of manageable sub-problems, each assigned to an agent. Cooperation is achieved by asking the agents to pass their solutions to their neighbors and to compromise on their solutions whenever they conflict. When all conflicts are resolved, the system has found a consensus solution, which is taken as the solution to the original problem. For a certain class of cooperative optimization algorithms, a consensus solution is guaranteed by theory to be a globally optimal one.
The process of a cooperative system of this kind is iterative and self-organized, and each agent in the system is autonomous. The system is also inherently distributed and parallel, making it highly scalable and less vulnerable to perturbations and disruptions than a centralized system.
The principle of cooperative optimization
Different cooperation schemes can lead cooperative optimization algorithms to completely different computational behaviors. With a certain cooperation scheme, cooperative optimization can in principle be understood as a lower-bounding technique for finding global optima. A cooperative optimization algorithm deploys a function of some simple form as a lower bound on an objective function. It progressively tightens the lower bound function until its global minimum touches the global minimum of the original objective function. The global minimum of the original objective function is then found by searching for the global minimum of the lower bound function instead, which is computationally much simpler. There are other lower-bounding techniques based on principles different from that of cooperative optimization; examples are Lagrangian relaxation techniques, cutting plane techniques, branch and bound algorithms, and branch and cut algorithms.
For example, given an objective function $E(x_1, x_2, \ldots, x_n)$ of $n$ variables, one simple form for the lower bound function of cooperative optimization can be
: $\Psi(x_1, x_2, \ldots, x_n) = \Psi_1(x_1) + \Psi_2(x_2) + \cdots + \Psi_n(x_n),$
where each $\Psi_i(x_i)$ is a unary function of the single variable $x_i$. Obviously, the global minimum of the above function can be easily found: each unary term can be minimized independently.
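For illustration, here is a minimal Python sketch of why such a separable function is easy to minimize: each unary term is minimized independently, in time linear in the number of variables and domain sizes. The tables below are toy values chosen for the example, not data from the source.

```python
# Minimizing a separable function Psi(x1,...,xn) = Psi_1(x1) + ... + Psi_n(xn)
# over finite domains: each unary term is minimized independently.

# Toy unary functions, tabulated over the finite domain {0, 1, 2}.
psi = [
    [3.0, 1.0, 2.0],   # Psi_1(x_1)
    [0.5, 4.0, 2.5],   # Psi_2(x_2)
    [2.0, 2.0, 0.1],   # Psi_3(x_3)
]

# Coordinate-wise minimization: O(n * |domain|) work in total.
x_star = [min(range(len(p)), key=p.__getitem__) for p in psi]
min_value = sum(p[v] for p, v in zip(psi, x_star))

print(x_star)     # [1, 0, 2]
print(min_value)  # 1.0 + 0.5 + 0.1 = 1.6
```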
Assume that the objective function can be decomposed into n sub-objective functions,
: $E(x_1, x_2, \ldots, x_n) = E_1 + E_2 + \cdots + E_n,$
where each sub-objective function $E_i$ depends on a subset of the variables. The lower bound function can also be decomposed into n sub-functions as follows:
: $\Psi(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} \Psi_i(x_i).$
If we compute a new lower bound function $\tilde{\Psi}$ of the same form as $\Psi$ as follows,
: $\tilde{\Psi}_i(x_i) = \min_{X_i \setminus x_i} \Big( (1 - \lambda)\, E_i + \lambda \sum_{j=1}^{n} w_{ij}\, \Psi_j(x_j) \Big), \quad \text{for } i = 1, 2, \ldots, n,$
where $X_i$ is the set of variables contained in $E_i$ and $\min_{X_i \setminus x_i}$ stands for minimizing with respect to all variables in $X_i$ excluding $x_i$ (here $0 \le \lambda < 1$, and the $w_{ij}$ are nonnegative weights whose columns sum to one, $\sum_i w_{ij} = 1$; both are discussed in detail in the basic form below), then it is straightforward to prove that
: $\tilde{\Psi}(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} \tilde{\Psi}_i(x_i)$
is a lower bound function of $E$ as long as $\Psi$ is one. That is,
: $\tilde{\Psi}(x) \le E(x)$ as long as $\Psi(x) \le E(x)$.
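Spelled out, the proof is a short summation argument, with the column-sum condition on the propagation coefficients doing the main work:
: $\tilde{\Psi}(x) = \sum_{i=1}^{n} \tilde{\Psi}_i(x_i) \le \sum_{i=1}^{n} \Big( (1 - \lambda)\, E_i + \lambda \sum_{j=1}^{n} w_{ij}\, \Psi_j(x_j) \Big) = (1 - \lambda)\, E(x) + \lambda \sum_{j=1}^{n} \Big( \sum_{i=1}^{n} w_{ij} \Big)\, \Psi_j(x_j) = (1 - \lambda)\, E(x) + \lambda\, \Psi(x) \le E(x).$
The first inequality holds because each minimum over $X_i \setminus x_i$ is at most the value of its argument at the given assignment $x$, and the last step uses the assumption $\Psi(x) \le E(x)$.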
Without loss of generality, assume that all objective functions, including the sub-objective functions, are nonnegative. Writing $\Psi_i^{(k)}$ for the result of applying the update above $k$ times, if we choose the initial condition as $\Psi_i^{(0)}(x_i) = 0$, for $i = 1, 2, \ldots, n$, then the cooperative optimization theory tells us that the lower bound functions computed by the above difference equations are progressively tightened:
: $\Psi^{(0)}(x) \le \Psi^{(1)}(x) \le \cdots \le \Psi^{(k)}(x) \le E(x).$
At some iteration $k$, if the lower bound function has been tightened enough that its global minimum equals the global minimum of the original objective function $E$, i.e.,
: $\min_{x} \Psi^{(k)}(x) = \min_{x} E(x),$
then the global minimum of $E$ is found; it is the same as the global minimum of the lower bound function,
: $x^{*} = \arg\min_{x} \Psi^{(k)}(x) = \arg\min_{x} E(x).$
The global minimum of $\Psi^{(k)}(x)$, being of the form $\sum_{i} \Psi_i^{(k)}(x_i)$, can be easily obtained as
: $x_i^{*} = \arg\min_{x_i} \Psi_i^{(k)}(x_i), \quad \text{for } i = 1, 2, \ldots, n.$
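The check above also yields a practical optimality certificate: since $\Psi^{(k)}(x) \le E(x)$ everywhere, any candidate $x^{*}$ with $E(x^{*}) = \min_x \Psi^{(k)}(x)$ must be a global minimum of $E$. A hedged Python sketch of this test follows; the toy objective and lower bound tables are illustrative assumptions, not from the source.

```python
# If a candidate x* drawn from the lower bound satisfies E(x*) = min Psi,
# then E(x) >= Psi(x) >= Psi(x*) = E(x*) for every x, so x* is a
# certified global minimum of E.

def certify(E, psi, domains):
    # global minimum of the separable lower bound, coordinate by coordinate
    x_star = tuple(min(dom, key=p.get) for p, dom in zip(psi, domains))
    lower = sum(p[v] for p, v in zip(psi, x_star))
    return x_star, lower, E(x_star) == lower   # True => certified optimal

# Toy objective over {0,1}^2 and a hand-made valid lower bound for it.
E = lambda x: x[0] * x[1] + (1 - x[0]) + 2 * (1 - x[1])
psi = [{0: 0.0, 1: 0.0}, {0: 2.0, 1: 1.0}]     # Psi(x) <= E(x) holds here
print(certify(E, psi, [(0, 1), (0, 1)]))       # ((0, 1), 1.0, True)
```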
Cooperative optimization in its basic form
Consider a multivariate objective function $E(x_1, x_2, \ldots, x_n)$ of n variables, or simply denoted as $E(x)$, where each variable $x_i$ has a finite domain. Assume that $E(x)$ can be broken into n sub-objective functions $E_i(x)$, satisfying
* $E(x) = E_1(x) + E_2(x) + \cdots + E_n(x)$,
* $E_i(x)$ contains at least variable $x_i$,
* the minimization of $E_i(x)$, for $i = 1, 2, \ldots, n$, is computationally manageable in complexity.
A cooperative optimization algorithm is defined as minimizing the sub-objective functions in an iterative and cooperative way as follows,
: $\Psi_i^{(k)}(x_i) = \min_{X_i \setminus x_i} \Big( (1 - \lambda_k)\, E_i + \lambda_k \sum_{j=1}^{n} w_{ij}\, \Psi_j^{(k-1)}(x_j) \Big), \quad \text{for } i = 1, 2, \ldots, n,$
where the cooperation is introduced by modifying the $i$-th sub-objective function at iteration $k$ using the minimization solutions of the $j$-th (modified) sub-objective functions from the previous iteration $k-1$. That is, the solution of minimizing the $i$-th sub-objective function is compromised with the solutions of minimizing the other sub-objective functions.
Specifically, in the difference equations, $k$ is the iteration step; the function being minimized on the right side of the $i$-th equation is the $i$-th modified sub-objective function; the left side, $\Psi_i^{(k)}(x_i)$, is a unary function on $x_i$ introduced by the algorithm for storing the solution of minimizing the $i$-th modified sub-objective function. $X_i$ is the set of variables contained in $E_i$, and $\min_{X_i \setminus x_i}$ stands for minimizing with respect to all variables in $X_i$ excluding $x_i$. $\lambda_k$ and $w_{ij}$ ($1 \le i, j \le n$) are parameters of the algorithm satisfying $0 \le \lambda_k < 1$ and $w_{ij} \ge 0$, with $\sum_{i=1}^{n} w_{ij} = 1$ for $1 \le j \le n$.
The coefficient $\lambda_k$ controls the level of cooperation at minimizing the sub-objective functions and is called the cooperation strength. The coefficients $w_{ij}$ control the propagation of the unary functions $\Psi_j^{(k-1)}(x_j)$, the solutions of minimizing the (modified) sub-objective functions. All of the $w_{ij}$ together form an $n \times n$ matrix called the propagation matrix.
The solution of the algorithm at iteration $k$ is defined as
: $\tilde{x}_i^{(k)} = \arg\min_{x_i} \Psi_i^{(k)}(x_i)$, for $i = 1, 2, \ldots, n$.
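The following Python sketch runs the basic-form difference equations on a toy three-variable chain problem. Everything here (the energy terms f12 and f23, the decomposition, the cooperation strength lam, the propagation matrix W, and the iteration count) is an illustrative assumption, not a prescription from the source.

```python
# Basic-form cooperative optimization on a toy chain of three binary
# variables with energy E = f12(x1, x2) + f23(x2, x3).
import itertools

DOMAIN = (0, 1)
n = 3

# Decompose E = E1 + E2 + E3 by splitting each pairwise term between its
# two endpoints, so that sub-objective E_i contains variable x_i.
f12 = lambda a, b: (a - b) ** 2 + 0.3 * a
f23 = lambda b, c: (b + c - 1) ** 2
scopes = [(0, 1), (0, 1, 2), (1, 2)]                 # X_i for each E_i
subobj = [
    lambda x: 0.5 * f12(x[0], x[1]),                           # E_1
    lambda x: 0.5 * f12(x[0], x[1]) + 0.5 * f23(x[1], x[2]),   # E_2
    lambda x: 0.5 * f23(x[1], x[2]),                           # E_3
]

lam = 0.5            # cooperation strength, 0 <= lam < 1
# Propagation matrix: w[i][j] > 0 only when x_j appears in E_i, and each
# column sums to one, as the lower-bound argument requires.
W = [[0.5, 1/3, 0.0],
     [0.5, 1/3, 0.5],
     [0.0, 1/3, 0.5]]

psi = [{v: 0.0 for v in DOMAIN} for _ in range(n)]   # Psi_i^(0) = 0
for k in range(20):
    new_psi = []
    for i, scope in enumerate(scopes):
        others = [s for s in scope if s != i]
        table = {}
        for v in DOMAIN:                             # candidate value of x_i
            best = float("inf")
            for assign in itertools.product(DOMAIN, repeat=len(others)):
                x = [None] * n
                x[i] = v
                for s, a in zip(others, assign):
                    x[s] = a
                # the i-th modified sub-objective function
                val = (1 - lam) * subobj[i](x) + lam * sum(
                    W[i][j] * psi[j][x[j]] for j in scope)
                best = min(best, val)
            table[v] = best                          # Psi_i^(k)(x_i)
        new_psi.append(table)
    psi = new_psi

# Solution at the final iteration: x_i = argmin of the i-th unary function.
print([min(DOMAIN, key=p.get) for p in psi])
```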
Consensus solution and global optimality conditions
Given any variable, say $x_i$, it may be contained in several modified sub-objective functions. At each iteration, $x_i$ has a value in the optimal solution for minimizing each of the modified sub-objective functions containing it. Those values may not all be the same. If all of them agree at some iteration, the shared value is called a consensus assignment for that variable. If a consensus assignment is reached for every variable simultaneously, the assignments form a consensus solution. If a cooperative optimization algorithm reaches an equilibrium after some number of iterations and a consensus solution is found, then that solution must be the global minimal solution.
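One way to state the condition formally (the notation $x^{(i,k)}$ is introduced here for illustration and is not from the source): let $x^{(i,k)}$ denote a minimizer of the $i$-th modified sub-objective function at iteration $k$, and let $x_j^{(i,k)}$ be the value it assigns to variable $x_j$. A consensus assignment for $x_j$ is reached at iteration $k$ when
: $x_j^{(i,k)} = x_j^{(i',k)} \quad \text{for all } i, i' \text{ such that } x_j \in X_i \cap X_{i'},$
and a consensus solution is an assignment satisfying this for every variable simultaneously.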
General convergence properties of cooperative optimization
It has been shown in theory that a cooperative optimization algorithm in its basic form, defined by the set of difference equations above, has many important computational properties not possessed by many other optimization algorithms. Given a constant cooperation strength $\lambda$ and fixed propagation coefficients $w_{ij}$, the algorithm has one and only one equilibrium. It always converges to that unique equilibrium at an exponential rate, regardless of initial conditions and perturbations.
Hence, cooperative optimization algorithms are stable (unique equilibrium), fast (exponential convergence rate), and robust (insensitive to initial conditions and perturbations). Due to its global optimality conditions, the cooperative optimization method knows whether a solution it has found is a global optimum and which directions are more promising than others for finding one. Because of that, it can organize its optimization process effectively and terminate it efficiently. The cooperative optimization method does not struggle with local minima.
Cooperative optimization in a simple form
The implementation of cooperative optimization in its basic form can be overwhelming, since it has many parameters to be set: $w_{ij}$, for $1 \le i, j \le n$, and a series of values for $\lambda_k$. Often in practice, we set all nonzero $w_{ij}$ to the same value and simplify the difference equations of cooperative optimization to the following form:
: $\Psi_i^{(k)}(x_i) = \min_{X_i \setminus x_i} \Big( E_i + \alpha \sum_{j \ne i} \Psi_j^{(k-1)}(x_j) \Big), \quad \text{for } i = 1, 2, \ldots, n,$
where $\alpha$ is a parameter controlling the cooperation strength. If $\alpha$ is of a small positive value, the cooperation among solving the n sub-problems is weak; if it is of a large positive value, the cooperation is strong. If it is too large, however, the iteration of the simplified cooperative optimization may not converge. Hence, the parameter $\alpha$ is to be tuned in experiments to maximize the performance of cooperative optimization.
To further improve the convergence and performance of cooperative optimization, the unary functions $\Psi_i^{(k)}(x_i)$ computed at each iteration can be offset by a value, say the minimal value of $\Psi_i^{(k)}(x_i)$. That is,
: $\Psi_i^{(k)}(x_i) \leftarrow \Psi_i^{(k)}(x_i) - \min_{x_i} \Psi_i^{(k)}(x_i).$
Thus, the offsetting defines an operator on $\Psi_i^{(k)}(x_i)$, denoted as
: $O\big(\Psi_i^{(k)}(x_i)\big) = \Psi_i^{(k)}(x_i) - \min_{x_i} \Psi_i^{(k)}(x_i).$
With this notation, the simplified cooperative optimization becomes
: $\Psi_i^{(k)}(x_i) = O\Big( \min_{X_i \setminus x_i} \Big( E_i + \alpha \sum_{j \ne i} \Psi_j^{(k-1)}(x_j) \Big) \Big), \quad \text{for } i = 1, 2, \ldots, n.$
The offsetting technique has been found to be very effective in practice at solving real-world optimization problems.
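A hedged Python sketch of the simplified iteration with offsetting follows, reusing the toy chain problem from the earlier sketch; the value of alpha and all problem data are illustrative assumptions.

```python
# Simplified cooperative optimization with the offsetting operator O on a
# toy chain of three binary variables.
import itertools

DOMAIN = (0, 1)
n = 3
f12 = lambda a, b: (a - b) ** 2 + 0.3 * a
f23 = lambda b, c: (b + c - 1) ** 2
scopes = [(0, 1), (0, 1, 2), (1, 2)]
subobj = [
    lambda x: 0.5 * f12(x[0], x[1]),
    lambda x: 0.5 * f12(x[0], x[1]) + 0.5 * f23(x[1], x[2]),
    lambda x: 0.5 * f23(x[1], x[2]),
]

alpha = 0.4                            # cooperation strength, tuned by hand

def offset(table):
    """The operator O: shift a unary function so that its minimum is zero."""
    m = min(table.values())
    return {v: t - m for v, t in table.items()}

psi = [{v: 0.0 for v in DOMAIN} for _ in range(n)]
for k in range(20):
    new_psi = []
    for i, scope in enumerate(scopes):
        others = [s for s in scope if s != i]
        table = {}
        for v in DOMAIN:
            best = float("inf")
            for assign in itertools.product(DOMAIN, repeat=len(others)):
                x = [None] * n
                x[i] = v
                for s, a in zip(others, assign):
                    x[s] = a
                # E_i plus alpha times the neighbors' previous unary functions
                val = subobj[i](x) + alpha * sum(
                    psi[j][x[j]] for j in scope if j != i)
                best = min(best, val)
            table[v] = best
        new_psi.append(offset(table))  # offsetting keeps the tables bounded
    psi = new_psi

print([min(DOMAIN, key=p.get) for p in psi])
```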
Cooperative optimization and quantum mechanics
Interestingly enough, cooperative optimization in a continuous-time version at its equilibrium has the same form as the time-independent Schrödinger equation. It could offer a deterministic interpretation of quantum mechanics besides Max Born's probabilistic interpretation, the one accepted by almost everybody today. The deterministic interpretation suggests that nature probably needs to follow a global optimization process so that (sub)atomic systems can most often find their ground states (global minimal energy). Otherwise, it is hard to give a reasonable, scientifically grounded explanation for the stability of atoms and molecules in the world, e.g., why most proteins in nature can fold into their native structures so that the creation of life is possible.
Given an energy function
: $E(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} E_i,$
where
: $E_i = e_i(x_i) + \frac{1}{2} \sum_{j \ne i} e_{ij}(x_i, x_j),$
with $e_i$ a unary energy term on variable $x_i$ and $e_{ij}$ a pairwise interaction term between variables $x_i$ and $x_j$.
The dynamics of cooperative optimization on the semi-ring $(+, \times)$ has the following form:
: $\psi_i^{(k)}(x_i) = \frac{1}{Z_i^{(k)}} \sum_{X_i \setminus x_i} e^{-E_i} \prod_{j \ne i} \psi_j^{(k-1)}(x_j), \quad \text{for } i = 1, 2, \ldots, n,$
where the function $\psi_i(x_i)$ is the soft decision for assigning variable $x_i$: given any variable value, the function gives the degree of preference of assigning that value to the variable. $Z_i^{(k)}$ is a normalization factor such that $\sum_{x_i} \psi_i^{(k)}(x_i) = 1$. With the smoothing of $\psi_i(x_i)$ for improving the convergence of the algorithm, the dynamics defined by the cooperative optimization describes a dissipative quantum system with quantum decoherence caused by quantum background noise and the interaction of the system with its environment. When the dynamics evolves into a stationary state, it becomes the well-known time-independent Schrödinger equation:
: $\hat{H}\, \psi(x) = E\, \psi(x),$
where
: $\hat{H} = -\frac{\hbar^2}{2m} \nabla^2 + V(x)$
is the Hamiltonian operator, $\psi(x)$ is the wave function, and $E$ is the energy of the stationary state.
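For concreteness, here is a heavily hedged Python sketch of one plausible discrete-variable reading of the $(+, \times)$ dynamics: minimization is replaced by summation and the propagated unary functions by a product of soft decisions, followed by normalization. The toy pairwise energies are illustrative assumptions, not from the source.

```python
# Soft-decision dynamics on the (+, x) semi-ring for a toy chain
# x1 - x2 - x3 of binary variables.
import math

DOMAIN = (0, 1)
e12 = lambda a, b: (a - b) ** 2
e23 = lambda b, c: (b + c - 1) ** 2

# psi[i][v]: soft decision, the degree of preference for assigning x_i = v
psi = [{v: 0.5 for v in DOMAIN} for _ in range(3)]
for k in range(30):
    raw = [
        # x1 marginalizes over its neighbor x2
        {a: sum(math.exp(-e12(a, b)) * psi[1][b] for b in DOMAIN)
         for a in DOMAIN},
        # x2 marginalizes over both of its neighbors
        {b: sum(math.exp(-(e12(a, b) + e23(b, c))) * psi[0][a] * psi[2][c]
                for a in DOMAIN for c in DOMAIN)
         for b in DOMAIN},
        # x3 marginalizes over its neighbor x2
        {c: sum(math.exp(-e23(b, c)) * psi[1][b] for b in DOMAIN)
         for c in DOMAIN},
    ]
    # normalization: each soft decision sums to one (the Z_i factors)
    psi = [{v: t[v] / sum(t.values()) for v in DOMAIN} for t in raw]

print([max(DOMAIN, key=p.get) for p in psi])   # most-preferred assignment
```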