Cooperative optimization

Cooperative optimization is a global optimization method proposed by the Chinese mathematician Xiao-Fei Huang for solving real-world NP-hard optimization problems, including instances with up to millions of variables, with reportedly strong performance and speed. The method can determine whether a solution it has found is a global optimum, and which search directions are more promising than others for finding one. Under some very general settings, a cooperative optimization algorithm has a unique equilibrium, converges to it regardless of initial conditions, and is insensitive to perturbations. It does not struggle with local minima.

Cooperative optimization is based on a system of multiple agents cooperating to solve a hard optimization problem. The problem is divided into a number of manageable sub-problems, each assigned to an agent. Cooperation is achieved by asking the agents to pass their solutions to their neighbors and to compromise on their solutions when they conflict. When all conflicts are resolved, the system has found a consensus solution, which is taken as the solution to the original problem. For a certain class of cooperative optimization algorithms, theory guarantees that a consensus solution must be the globally optimal one.

The process of a cooperative system of this kind is iterative and self-organized, and each agent in the system is autonomous. The system is also inherently distributed and parallel, making it highly scalable and less vulnerable to perturbations and disruptions than a centralized system.

The principle of cooperative optimization

Different cooperation schemes can lead cooperative optimization algorithms to completely different computational behaviors. Under a certain cooperation scheme, cooperative optimization can in principle be understood as a lower-bounding technique for finding global optima. A cooperative optimization algorithm deploys a function of some simple form as a lower bound on an objective function. It progressively tightens the lower bound function until the lower bound's global minimum touches the global minimum of the original objective function. The global minimum of the original objective function is then found by searching for the global minimum of the lower bound function instead, which is computationally much simpler. There are other lower-bounding techniques based on principles different from that of cooperative optimization; examples are Lagrangian relaxation techniques, cutting plane techniques, branch and bound algorithms, and branch and cut algorithms.

For example, consider an objective function E(x_1, x_2, \ldots, x_n) of n variables. One simple form for the lower bound function of cooperative optimization is

: \Psi(x) = \Psi_1(x_1) + \Psi_2(x_2) + \cdots + \Psi_n(x_n).

Obviously, the global minimum of the above function \Psi(x) can be easily found, since each unary term \Psi_i(x_i) can be minimized independently.
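
The following minimal Python sketch illustrates this point; all names and values here are illustrative, not part of the method's specification:

import numpy as np

rng = np.random.default_rng(0)
n, domain_size = 5, 4                        # 5 variables, 4 values each
Psi = rng.standard_normal((n, domain_size))  # Psi[i][v] = Psi_i(x_i = v)
# Global minimum of the separable function: minimize each row on its own.
x_star = Psi.argmin(axis=1)                  # one argmin per variable
min_value = Psi.min(axis=1).sum()            # sum of the unary minima
print("minimizer:", x_star, "value:", min_value)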

Assume that the objective function E(x) can be decomposed into n sub-objective functions,

:E(x) = E_1(x) + E_2(x) + \cdots + E_n(x).

The lower bound function \Psi(x) can also be decomposed into n sub-functions as follows

:\Psi(x) = \sum^{n}_{i=1} \left( \sum_{j} w_{ij} \Psi_j (x_j) \right), \text{ where } \sum_i w_{ij} = 1 \text{ for } j=1,2,\ldots,n.
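
A quick numeric check of this identity (illustrative values only): if every column of the weight matrix sums to 1, the double sum reproduces \sum_j \Psi_j(x_j).

import numpy as np

rng = np.random.default_rng(3)
n = 4
W = rng.random((n, n))
W /= W.sum(axis=0, keepdims=True)       # make every column sum to 1
psi_vals = rng.standard_normal(n)       # Psi_j(x_j) at some fixed x
lhs = (W * psi_vals[None, :]).sum()     # sum over both i and j
print(np.isclose(lhs, psi_vals.sum()))  # -> True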

If we compute a new lower bound function \Psi'(x) of the same form as \Psi(x) as follows

: \Psi'_i (x_i) = \min_{X_i \setminus \{x_i\}} \left( \left(1 - \lambda_k \right) E_i(x) + \lambda_k \sum_{j} w_{ij} \Psi_j(x_j) \right), \text{ for } i=1,2,\ldots,n,

where X_i is the set of variables contained in E_i(x) and \min_{X_i \setminus \{x_i\}} stands for minimizing with respect to all variables in X_i excluding x_i, then it is straightforward to prove that

:\Psi'(x) = \Psi'_1(x_1) + \Psi'_2(x_2) + \cdots + \Psi'_n(x_n)

is a lower bound function of E(x) as long as \Psi(x) is one. That is,

:\Psi'(x) \le E(x) \text{ as long as } \Psi(x) \le E(x).
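
The proof is a short summation argument: for any fixed x, the minimum defining \Psi'_i(x_i) is no larger than its argument evaluated at that x, so

: \Psi'_i(x_i) \le \left(1 - \lambda_k \right) E_i(x) + \lambda_k \sum_{j} w_{ij} \Psi_j(x_j).

Summing over i and using \sum_i w_{ij} = 1 gives

: \Psi'(x) \le \left(1 - \lambda_k \right) E(x) + \lambda_k \Psi(x) \le E(x),

where the last inequality uses \Psi(x) \le E(x) and 0 \le \lambda_k < 1.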

Without loss of generality, assume that all objective functions, including the sub-objective functions E_i(x), are nonnegative. If we choose the initial condition \Psi^{(0)}_i(x_i) = 0, for i=1,2,\ldots, n, then cooperative optimization theory tells us that the lower bound function \Psi(x) computed by the above difference equations is progressively tightened,

: \Psi^{(0)}(x) \le \Psi^{(1)}(x) \le \cdots \le \Psi^{(k)}(x) \le E(x).

At some iteration k, if the lower bound function \Psi^{(k)}(x) has been tightened enough that its global minimum equals the global minimum of the original objective function E(x), i.e.,

:\min_{x} \Psi^{(k)}(x) = \min_{x} E(x),

then the global minimum of E(x) has been found; it is the same as the global minimum of the lower bound function \Psi^{(k)}(x),

:\arg \min_{x} E(x) = \arg \min_{x} \Psi^{(k)}(x).

The global minimum of \Psi^{(k)}(x), being of the form \sum_i \Psi^{(k)}_i(x_i), can be easily obtained as

:x^{*(k)}_i = \arg \min_{x_i} \Psi^{(k)}_i (x_i), \text{ for } i = 1, 2, \ldots, n.

Cooperative optimization in its basic form

Let E(x_1, x_2, \ldots, x_n) be a multivariate objective function of n variables, denoted simply as E(x), where each variable has a finite domain. Assume that E(x) can be broken into n sub-objective functions E_i(x) satisfying

* E(x) = E_1(x) + E_2(x) + \cdots + E_n(x),
* E_i(x) contains at least the variable x_i,
* the minimization of E_i(x), for i=1,2,\ldots,n, is computationally manageable in complexity.

A cooperative optimization algorithm is defined as minimizing the n sub-objective functions in an iterative and cooperative way as follows,

: \Psi^{(k)}_i (x_i) = \min_{X_i \setminus \{x_i\}} \left( \left(1 - \lambda_k \right) E_i(x) + \lambda_k \sum_{j} w_{ij} \Psi^{(k-1)}_j(x_j) \right), \text{ for } i=1,2,\ldots,n,

where the cooperation is introduced by modifying the i-th sub-objective function at iteration k using \Psi^{(k-1)}_j(x_j), the minimization solutions of the j-th (modified) sub-objective functions from the previous iteration k-1. That is, the solution of minimizing the i-th sub-objective function is compromised with the solutions of minimizing the other sub-objective functions.

Specifically, in the difference equations, k is the iteration step; the right side of the equation, for each i, is the i-th modified objective function; the left side of the equation, \Psi^{(k)}_i(x_i), is a unary function on x_i introduced by the algorithm for storing the solution of minimizing the i-th modified objective function. X_i is the set of variables contained in E_i(x), and \min_{X_i \setminus \{x_i\}} stands for minimizing with respect to all variables in X_i excluding x_i. \lambda_k and w_{ij} (1 \le i,j \le n) are parameters of the algorithm satisfying 0 \le \lambda_k < 1 and w_{1j} + w_{2j} + \cdots + w_{nj} = 1, for j=1,2,\ldots, n.

The coefficient \lambda_k is a parameter controlling the level of cooperation in minimizing the sub-objective functions and is called the cooperation strength. The coefficients w_{ij} control the propagation of \Psi^{(k-1)}_j(x_j), the solutions of minimizing the (modified) sub-objective functions. Together, the w_{ij} form an n \times n matrix called the propagation matrix.

The solution of the algorithm at iteration k is defined as

: \tilde{x}^{(k)}_i = \arg \min_{x_i} \Psi^{(k)}_i (x_i), \text{ for } i=1,2,\ldots, n.
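
A minimal Python sketch of these difference equations follows, assuming a toy pairwise objective decomposed as E_i(x) = \theta_i(x_i) + \tfrac{1}{2}\sum_j \theta_{ij}(x_i, x_j), uniform propagation weights, and a constant cooperation strength; all names and values are illustrative assumptions, not a reference implementation:

import numpy as np

rng = np.random.default_rng(1)
# Toy pairwise objective on a 4-cycle: E(x) = sum_i theta_i(x_i)
# + sum_{(i,j) in edges} theta_ij(x_i, x_j), with 3 labels per variable.
n, D = 4, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta = rng.standard_normal((n, D))                        # unary costs
theta_p = {e: rng.standard_normal((D, D)) for e in edges}  # pairwise costs

def pair(i, j):
    # pairwise cost table theta_ij indexed as [x_i, x_j]
    return theta_p[(i, j)] if (i, j) in theta_p else theta_p[(j, i)].T

nbrs = {i: [] for i in range(n)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

# Decomposition E_i(x) = theta_i(x_i) + (1/2) sum_{j in nbrs(i)} theta_ij,
# so that sum_i E_i = E; X_i is {x_i} plus the neighbors of x_i.
lam = 0.5      # constant cooperation strength, 0 <= lam < 1
w = 1.0 / 3.0  # uniform weights over {i} + nbrs(i); each column of W sums to 1

Psi = np.zeros((n, D))  # initial condition Psi_i^{(0)}(x_i) = 0
for k in range(100):
    new = np.empty_like(Psi)
    for i in range(n):
        # terms depending only on x_i (including the self-propagation term)
        val = (1 - lam) * theta[i] + lam * w * Psi[i]
        # each neighbor variable is minimized out independently
        for j in nbrs[i]:
            coupled = (1 - lam) * 0.5 * pair(i, j) + lam * w * Psi[j][None, :]
            val = val + coupled.min(axis=1)
        new[i] = val
    Psi = new

x_tilde = Psi.argmin(axis=1)  # the solution at the final iteration
print("candidate solution:", x_tilde)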

Consensus solution and global optimality conditions

Given any variable, say x_i, it may be contained in several modified sub-objective functions. At each iteration, x_i has a value in the optimal solution of each modified sub-objective function containing it, and those values may differ. If all of them are the same at some iteration, that value is called a consensus assignment for the variable. If a consensus assignment is reached simultaneously for every variable, the result is called a consensus solution. If a cooperative optimization algorithm reaches an equilibrium after some number of iterations and a consensus solution is found, then that solution must be the globally minimal one.
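
A minimal sketch of the consensus test in Python: each sub-problem proposes values for the variables it contains, and consensus holds when all proposals for each variable agree. The dicts below are illustrative stand-ins for per-sub-problem minimizers.

proposals = [
    {0: 2, 1: 0},   # minimizer of sub-problem 0 (contains x_0, x_1)
    {1: 0, 2: 1},   # minimizer of sub-problem 1 (contains x_1, x_2)
    {2: 1, 0: 2},   # minimizer of sub-problem 2 (contains x_2, x_0)
]
agreed, consensus = {}, True
for sol in proposals:
    for var, val in sol.items():
        if agreed.setdefault(var, val) != val:
            consensus = False   # two sub-problems disagree on this variable
print("consensus solution found:", consensus)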

General convergence properties of cooperative optimization

It has been shown in theory that a cooperative optimization algorithm in its basic form, defined by the set of difference equations above, has important computational properties that many other optimization algorithms lack. Given a constant cooperation strength \lambda_k = \lambda and fixed propagation coefficients w_{ij}, the algorithm has one and only one equilibrium. It always converges to that unique equilibrium at an exponential rate, regardless of initial conditions and perturbations.

Hence, cooperative optimization algorithms are stable (unique equilibrium), fast (exponential convergence rate), and robust (insensitive to initial conditions and perturbations). Due to its global optimality conditions, the cooperative optimization method knows whether a solution it has found is a global optimum and which directions are more promising than others for finding one. Because of that, it can organize its optimization process effectively and terminate it efficiently. The method does not struggle with local minima.
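
The uniqueness and robustness claims can be illustrated numerically. The sketch below, a two-variable toy problem with illustrative names in the spirit of the difference equations above, runs the same update from two very different initial conditions and checks that both runs approach the same equilibrium:

import numpy as np

D = 3
rng = np.random.default_rng(4)
theta0, theta1 = rng.standard_normal(D), rng.standard_normal(D)
p = rng.standard_normal((D, D))   # pairwise cost p(x_0, x_1)
lam = 0.5                         # constant cooperation strength

def step(Psi0, Psi1):
    # basic-form update with uniform weights w = 1/2 over {self, neighbor}
    n0 = ((1 - lam) * theta0 + lam * 0.5 * Psi0
          + np.min((1 - lam) * 0.5 * p + lam * 0.5 * Psi1[None, :], axis=1))
    n1 = ((1 - lam) * theta1 + lam * 0.5 * Psi1
          + np.min((1 - lam) * 0.5 * p + lam * 0.5 * Psi0[:, None], axis=0))
    return n0, n1

runA = (np.zeros(D), np.zeros(D))
runB = (rng.standard_normal(D) * 10, rng.standard_normal(D) * 10)
for _ in range(80):
    runA, runB = step(*runA), step(*runB)
print(np.allclose(runA[0], runB[0]) and np.allclose(runA[1], runB[1]))  # True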

Cooperative optimization in a simple form

Implementing cooperative optimization in its basic form can be overwhelming, since it has many parameters to set: n \times n values for the w_{ij} and a series of values for \lambda_k. Often in practice, we set all nonzero w_{ij} to the same value and simplify the difference equations of cooperative optimization to the following form

: \Psi^{(k)}_i (x_i) = \min_{X_i \setminus \{x_i\}} \left( E_i(x) + \alpha \sum_{j} \Psi^{(k-1)}_j(x_j) \right), \text{ for } i=1,2,\ldots,n,

where \alpha is a parameter controlling the cooperation strength. A small positive value yields weak cooperation among the n sub-problems E_i(x); a large positive value yields strong cooperation. If \alpha is too large, the simplified iteration may not converge. Hence, \alpha is tuned in experiments to maximize the performance of cooperative optimization.
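
The simplified update is easy to state in code. A minimal two-variable Python sketch follows (toy problem with illustrative names; the split E_0 = \theta_0 + p/2, E_1 = \theta_1 + p/2 is an assumption made for the example):

import numpy as np

D = 3
rng = np.random.default_rng(2)
theta0, theta1 = rng.standard_normal(D), rng.standard_normal(D)
p = rng.standard_normal((D, D))   # pairwise cost p(x_0, x_1)
alpha = 0.3                       # cooperation strength, tuned by hand

Psi0, Psi1 = np.zeros(D), np.zeros(D)
for k in range(50):
    # minimize the other variable out of each modified sub-objective
    new0 = theta0 + alpha * Psi0 + np.min(0.5 * p + alpha * Psi1[None, :], axis=1)
    new1 = theta1 + alpha * Psi1 + np.min(0.5 * p + alpha * Psi0[:, None], axis=0)
    Psi0, Psi1 = new0, new1
print("x_0, x_1 =", Psi0.argmin(), Psi1.argmin())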

To further improve the convergence and performance of cooperative optimization, the unary functions \Psi^{(k)}_i(x_i) computed at each iteration can be offset by a value, say the minimal value of \Psi^{(k)}_i(x_i). That is,

:\Psi^{(k)}_i(x_i) := \Psi^{(k)}_i(x_i) - \min_{x_i} \Psi^{(k)}_i(x_i).

Thus, the offsetting defines an operator on \Psi^{(k)}_i(x_i), denoted as

: {\it O} \left(\Psi^{(k)}_i(x_i) \right).

With this notation, the simplified cooperative optimization becomes

: \Psi^{(k)}_i (x_i) = {\it O}\left(\min_{X_i \setminus \{x_i\}} \left( E_i(x) + \alpha \sum_{j} \Psi^{(k-1)}_j(x_j) \right) \right), \text{ for } i=1,2,\ldots,n.

The offsetting technique has proved very effective in practice at solving real-world optimization problems.
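
In code, the offsetting operator is a one-liner; a minimal sketch with illustrative values:

import numpy as np

def offset(psi):
    # the operator O: subtract the minimum so that min_x psi(x) = 0
    return psi - psi.min()

print(offset(np.array([3.2, 1.5, 4.0])))   # -> [1.7  0.   2.5]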

Cooperative optimization and quantum mechanics

Interestingly, cooperative optimization in a continuous-time version has, at its equilibrium, the same form as the time-independent Schrödinger equation. It could offer a deterministic interpretation of quantum mechanics besides Max Born's probabilistic interpretation, which is accepted by almost everybody today. The deterministic interpretation suggests that nature may follow a global optimization process so that (sub)atomic systems can most often find their ground states (global minimal energy). Otherwise, it is hard to give a scientifically grounded explanation for the stability of atoms and molecules in the world, e.g., why most proteins in nature can fold into their native structures so that the creation of life is possible.

Given an energy function

: E(x_1, x_2, \ldots, x_n) = \sum^{n}_{i=1} \left(e_i (x_{i}) + \sum^{n}_{j:~j > i} e_{ij} (x_i, x_j) \right),

where

:e_{ij}(x_i, x_j) = e_{ji}(x_j, x_i).

The dynamics of cooperative optimization on the semiring (C, +, \times) has the following form,

: \psi_i(x_i, t + \Delta t) = \frac{1}{Z_i (t + \Delta t)} \psi_i(x_i, t) e^{-(\Delta t/\hbar)e_i(x_i)} \prod_{j,\, j \neq i} \int d x_j \; e^{-(\Delta t/\hbar)e_{ij}(x_i, x_j)} \left|\psi_j (x_j, t) \right|^2,

where the function |\psi_i(x_i, t+\Delta t)| is the soft decision for assigning variable x_i: given any value, it gives the degree of preference for assigning that value to the variable. Z_i(t+\Delta t) is a normalization factor such that \int d x_i\, |\psi_i(x_i, t)|^2 = 1. With smoothing of \psi_i(x_i) to improve the convergence of the algorithm, the dynamics defined by cooperative optimization describes a dissipative quantum system with quantum decoherence caused by quantum background noise and by the interaction of the system with its environment. When the dynamics evolves into a stationary state, it becomes the well-known time-independent Schrödinger equation,

:E_i \psi_i(x_i, t) = \left( -\frac{\hbar^2}{2 m_i} \nabla^2_i + V_i(x_i) \right) \psi_i(x_i, t),

where

:V_i(x_i) = e_i(x_i) + \sum_{j,\, j \neq i} \int d x_j~e_{ij}(x_i, x_j)\, |\psi_j (x_j, t)|^2.
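
For a finite label set, the integral in the dynamics above becomes a sum, and the update can be sketched directly. The Python sketch below is an illustrative discretization with real, positive amplitudes; all names and constants are assumptions made for the example, not part of the theory's statement:

import numpy as np

D, n, dt_hbar = 3, 4, 0.1              # labels, variables, Delta t / hbar
rng = np.random.default_rng(5)
e_unary = rng.standard_normal((n, D))  # e_i(x_i)
e_pair = rng.standard_normal((n, n, D, D))
e_pair = (e_pair + e_pair.transpose(1, 0, 3, 2)) / 2  # e_ij(x_i,x_j) = e_ji(x_j,x_i)

psi = np.full((n, D), 1.0 / np.sqrt(D))   # uniform initial amplitudes
for t in range(200):
    new = np.empty_like(psi)
    for i in range(n):
        amp = psi[i] * np.exp(-dt_hbar * e_unary[i])
        for j in range(n):
            if j != i:
                # a sum over x_j replaces the integral on a finite domain
                amp = amp * (np.exp(-dt_hbar * e_pair[i, j]) @ (psi[j] ** 2))
        new[i] = amp / np.sqrt((amp ** 2).sum())   # the Z_i normalization
    psi = new
print("soft decisions:", np.round(psi ** 2, 3))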
