Distributed constraint optimization

Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must choose, in a distributed fashion, values for a set of variables such that the cost of a set of constraints over the variables is either minimized or maximized.

Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described over variables with predefined domains, and the variables have to be assigned the same values by the different agents.

Problems defined with this framework can be solved by any of the algorithms that are proposed for it.

The framework was used under different names in the 1980s. The first known usage of the current name dates from 1990.

Definitions

DCOP

A DCOP can be defined as a tuple \langle A, V, \mathfrak{D}, f, \alpha, \eta \rangle, where:

  • A is a set of agents;
  • V is a set of variables, \{v_1,v_2,\cdots,v_{|V|}\};
  • \mathfrak{D} is a set of domains, \{D_1, D_2, \ldots, D_{|V|}\}, where each D \in \mathfrak{D} is a finite set containing the values to which its associated variable may be assigned;
  • f is a function[1][2]
    f : \bigcup_{S \in \mathfrak{P}(V)} \prod_{v_i \in S} \left( \{v_i\} \times D_i \right) \to \mathbb{N} \cup \{\infty\}
    that maps every possible variable assignment to a cost. This function can also be thought of as defining constraints between the variables;
  • α is a function \alpha : V \to A mapping variables to their associated agent. \alpha(v_i) \mapsto a_j implies that it is agent aj's responsibility to assign the value of variable vi. Note that it is not necessarily true that α is either an injection or a surjection; and
  • η is an operator that aggregates all of the individual f costs for all possible variable assignments. This is usually accomplished through summation:
    \eta(f) \mapsto \sum_{s \in \bigcup_{S \in \mathfrak{P}(V)} \prod_{v_i \in S} \left( \{v_i\} \times D_i \right)} f(s).

The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize η(f) for a given assignment of the variables.
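
To make the tuple concrete, the following is a minimal Python sketch of a DCOP data structure. The names (DCOP, Constraint, total_cost, and so on) are illustrative choices, not part of any standard library; f is represented as a collection of constraints whose costs are summed by η.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List, Tuple

Assignment = Dict[str, Hashable]  # a (partial) context: variable name -> value


@dataclass
class Constraint:
    scope: Tuple[str, ...]               # the variables this constraint mentions
    cost: Callable[[Assignment], float]  # the piece of f defined over this scope


@dataclass
class DCOP:
    agents: List[str]                    # A
    variables: List[str]                 # V
    domains: Dict[str, List[Hashable]]   # D_i for each v_i
    constraints: List[Constraint]        # together these make up f
    owner: Dict[str, str]                # alpha: variable -> responsible agent

    def total_cost(self, assignment: Assignment) -> float:
        """eta(f) under one assignment: sum the cost of every constraint whose
        scope is fully assigned; constraints with unassigned variables add 0."""
        return sum(
            c.cost(assignment)
            for c in self.constraints
            if all(v in assignment for v in c.scope)
        )
```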

Context

A Context is a variable assignment for a DCOP. This can be thought of as a function mapping variables in the DCOP to their current values:

t : V \to \left( \bigcup_{D \in \mathfrak{D}} D \right) \cup \{\emptyset\}.

Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, t(v_i) \mapsto \emptyset implies that the agent α(vi) has not yet assigned a value to variable vi. Given this representation, the "domain" (i.e., the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the t function) as an input to the f function.
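
In code, using the hypothetical sketch above, a context is simply a dictionary holding a partial assignment; variables absent from it play the role of ∅.

```python
# A context for a problem with variables v1, v2, v3: v2 is still unassigned, t(v2) = ∅.
context = {"v1": "red", "v3": "blue"}

def value(context, variable):
    """Return t(variable), or None to stand for the empty assignment."""
    return context.get(variable)

assert value(context, "v2") is None  # agent alpha(v2) has not yet chosen a value
```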

Example problems

Distributed graph coloring

The graph coloring problem is as follows: given a graph G = \langle N, E \rangle and a set of colors C, assign each vertex, n\in N, a color, c\in C, such that the number of pairs of adjacent vertices with the same color is minimized.

As a DCOP, there is one agent per vertex that is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality | C | (there is one domain value for each possible color). For each vertex n_i \in N, create a variable in the DCOP v_i \in V with domain Di = C. For each pair of adjacent vertices \langle n_i, n_j \rangle \in E, create a constraint of cost 1 if both of the associated variables are assigned the same color:

(\forall c \in C : f(\langle v_i, c \rangle, \langle v_j, c \rangle ) \mapsto 1).

The objective, then, is to minimize η(f).
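
A minimal sketch of this encoding, reusing the illustrative DCOP and Constraint classes from the definition section: one agent and one variable per vertex, and one cost-1 constraint per edge whose endpoints share a color. The helper name graph_coloring_dcop is hypothetical.

```python
def graph_coloring_dcop(vertices, edges, colors):
    """Encode graph coloring as a DCOP: one agent and one variable per vertex."""
    constraints = []
    for (i, j) in edges:
        # Cost 1 if the two adjacent vertices are assigned the same color, else 0.
        constraints.append(
            Constraint(
                scope=(f"v{i}", f"v{j}"),
                cost=lambda a, i=i, j=j: 1.0 if a[f"v{i}"] == a[f"v{j}"] else 0.0,
            )
        )
    return DCOP(
        agents=[f"a{n}" for n in vertices],
        variables=[f"v{n}" for n in vertices],
        domains={f"v{n}": list(colors) for n in vertices},
        constraints=constraints,
        owner={f"v{n}": f"a{n}" for n in vertices},
    )

# A triangle with only two colors always has at least one conflict.
problem = graph_coloring_dcop([1, 2, 3], [(1, 2), (2, 3), (1, 3)], ["red", "blue"])
print(problem.total_cost({"v1": "red", "v2": "blue", "v3": "red"}))  # 1.0
```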

Distributed multiple knapsack problem

The distributed variant of the multiple knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the total amount of overflow is minimized. Let I be the set of items, K be the set of knapsacks, s : I \to \mathbb{N} be a function mapping items to their volumes, and c : K \to \mathbb{N} be a function mapping knapsacks to their capacities.

To encode this problem as a DCOP, for each item i \in I create one variable v_i \in V with associated domain Di = K. Then for every possible context t:

f(t) \mapsto \sum_{k \in K} \begin{cases}
                    0&  r(t,k) \leq c(k),\\
                    r(t,k)-c(k) & \text{otherwise},
                  \end{cases}

where r(t,k) is a function such that

r(t,k) = \sum_{v_i \in t^{-1}(k)} s(i).
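
Under the same illustrative conventions, the load r(t, k) and overflow cost f(t) might be written as follows; volume and capacity are hypothetical dictionaries standing in for s and c, and a context maps each item variable directly to its chosen knapsack.

```python
def load(context, knapsack, volume):
    """r(t, k): total volume of the items currently assigned to this knapsack."""
    return sum(volume[item] for item, k in context.items() if k == knapsack)


def overflow_cost(context, knapsacks, volume, capacity):
    """f(t): summed overflow over all knapsacks; a knapsack whose load fits adds 0."""
    return sum(max(0, load(context, k, volume) - capacity[k]) for k in knapsacks)


# Two items of volume 3 and 4 forced into one knapsack of capacity 5 overflow by 2.
volume = {"i1": 3, "i2": 4}
capacity = {"k1": 5, "k2": 2}
print(overflow_cost({"i1": "k1", "i2": "k1"}, ["k1", "k2"], volume, capacity))  # 2
```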

Algorithms

DCOP algorithms can be classified according to the search strategy (best-first search or depth-first branch-and-bound search), the synchronization among agents (synchronous or asynchronous), the communication among agents (point-to-point with neighbors in the constraint graph or broadcast) and the main communication topology (chain or tree).[3] ADOPT, for example, uses best-first search, asynchronous synchronization, point-to-point communication between neighboring agents in the constraint graph and a constraint tree as main communication topology.
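
The published algorithms are distributed, but the depth-first branch-and-bound strategy several of them use can be illustrated with a small centralized simulation over the DCOP sketch from the definition section. This is only an illustration of the search strategy under the assumption of non-negative constraint costs, not an implementation of any particular algorithm listed below.

```python
def branch_and_bound(dcop, order=None):
    """Centralized depth-first branch-and-bound over a DCOP (illustration only):
    extend a context one variable at a time and prune any branch whose partial
    cost already reaches the best complete assignment found so far."""
    order = order or list(dcop.variables)
    best = {"cost": float("inf"), "assignment": None}

    def recurse(context, depth):
        cost_so_far = dcop.total_cost(context)  # only fully assigned constraints count
        if cost_so_far >= best["cost"]:
            return  # prune: with non-negative costs this branch cannot beat the incumbent
        if depth == len(order):
            best["cost"], best["assignment"] = cost_so_far, dict(context)
            return
        var = order[depth]
        for val in dcop.domains[var]:
            context[var] = val
            recurse(context, depth + 1)
            del context[var]

    recurse({}, 0)
    return best["assignment"], best["cost"]

# On the two-color triangle above, the best achievable cost is a single conflict.
print(branch_and_bound(problem))  # e.g. ({'v1': 'red', 'v2': 'red', 'v3': 'blue'}, 1.0)
```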

For each algorithm, the year it was introduced, its memory complexity, the number of messages it requires, its correctness/completeness status, and available implementations are listed below:

  • NCBB (No-Commitment Branch and Bound),[4] 2006. Memory: polynomial (or any-space[5]); messages: exponential; correctness/completeness: proven. Implementations: reference implementation not publicly released; DCOPolis (GNU LGPL).
  • DPOP (Distributed Pseudotree Optimization Procedure),[6] 2005. Memory: exponential; messages: linear; correctness/completeness: proven. Implementations: reference implementation FRODO (GNU Affero GPL); DCOPolis (GNU LGPL).
  • OptAPO (Asynchronous Partial Overlay),[7] 2004. Memory: polynomial; messages: exponential; correctness/completeness: proven, but the proof of completeness has been challenged.[8] Implementations: reference implementation OptAPO; DCOPolis (GNU LGPL), in development.
  • Adopt (Asynchronous Backtracking),[9] 2003. Memory: polynomial (or any-space[10]); messages: exponential; correctness/completeness: proven. Implementations: reference implementation Adopt; DCOPolis (GNU LGPL); FRODO (GNU Affero GPL).
  • Secure Multiparty Computation for Solving DisCSPs (MPC-DisCSP1 to MPC-DisCSP4),[citation needed] 2003. Memory: [citation needed]; messages: [citation needed]. Note: secure if 1/2 of the participants are trustworthy. Implementations: [citation needed].
  • Secure Computation with Semi-Trusted Servers,[citation needed] 2002. Memory: [citation needed]; messages: [citation needed]. Note: security increases with the number of trustworthy servers. Implementations: [citation needed].
  • ABTR (Asynchronous Backtracking with Reordering),[citation needed] 2001. Memory: [citation needed]; messages: [citation needed]. Note: reordering in ABT with bounded nogoods. Implementations: [citation needed].
  • DMAC (Maintaining Asynchronously Consistencies),[citation needed] 2001. Memory: [citation needed]; messages: [citation needed]. Note: the fastest algorithm. Implementations: [citation needed].
  • AAS (Asynchronous Aggregation Search),[citation needed] 2000. Memory: [citation needed]; messages: [citation needed]. Note: aggregation of values in ABT. Implementations: [citation needed].
  • DFC (Distributed Forward Chaining),[citation needed] 2000. Memory: [citation needed]; messages: [citation needed]. Note: low, comparable to ABT. Implementations: [citation needed].
  • DBA (Distributed Breakout Algorithm), 1995. Memory: [citation needed]; messages: [citation needed]. Note: incomplete but fast. Implementations: FRODO version 1.
  • AWC (Asynchronous Weak-Commitment),[citation needed] 1994. Memory: [citation needed]; messages: [citation needed]. Note: reordering, fast, complete (only with exponential space). Implementations: [citation needed].
  • ABT (Asynchronous Backtracking),[citation needed] 1992. Memory: [citation needed]; messages: [citation needed]. Note: static ordering, complete. Implementations: [citation needed].

Hybrids of these DCOP algorithms also exist. BnB-Adopt[3], for example, changes the search strategy of Adopt from best-first search to depth-first branch-and-bound search.

Notes and references

  1. ^ "\mathfrak{P}(\mathfrak{V})" denotes the power set of V
  2. ^ "\times" and "\sum" denote the Cartesian product.
  3. ^ a b Yeoh, William; Felner, Ariel; Koenig, Sven (2008), "BnB-ADOPT: An Asynchronous Branch-and-Bound DCOP Algorithm", Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 591–598, http://idm-lab.org/bib/abstracts/Koen08d.html 
  4. ^ Chechetka, Anton; Sycara, Katia (May 2006), "No-Commitment Branch and Bound Search for Distributed Constraint Optimization", Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1427–1429, http://www.ri.cmu.edu/pub_files/pub4/chechetka_anton_2006_2/chechetka_anton_2006_2.pdf 
  5. ^ Chechetka, Anton; Sycara, Katia (March 2006), "An Any-space Algorithm for Distributed Constraint Optimization", Proceedings of the AAAI Spring Symposium on Distributed Plan and Schedule Management, http://www.ri.cmu.edu/pub_files/pub4/chechetka_anton_2006_1/chechetka_anton_2006_1.pdf 
  6. ^ Petcu, Adrian; Faltings, Boi (August 2005), "DPOP: A Scalable Method for Multiagent Constraint Optimization", Proceedings of the 19th International Joint Conference on Artificial Intelligence, IJCAI 2005, Edinburgh, Scotland, pp. 266-271, http://liawww.epfl.ch/cgi-bin/Pubs/single_entry?bibtex_key=Petcu2005 
  7. ^ Mailler, Roger; Lesser, Victor (2004), "Solving Distributed Constraint Optimization Problems Using Cooperative Mediation", Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, IEEE Computer Society, pp. 438–445, ftp://mas.cs.umass.edu/pub/mailler/mailler-569.pdf 
  8. ^ Grinshpoun, Tal; Zazon, Moshe; Binshtok, Maxim; Meisels, Amnon (2007), "Termination Problem of the APO Algorithm", Proceedings of the Eighth International Workshop on Distributed Constraint Reasoning, pp. 117–124, http://liawww.epfl.ch/Publications/Archive/DCR07Proceedings.pdf 
  9. ^ The originally published version of Adopt was uninformed, see Modi, Pragnesh Jay; Shen, Wei-Min; Tambe, Milind; Yokoo, Makoto (2003), "An asynchronous complete method for distributed constraint optimization", Proceedings of the second international joint conference on autonomous agents and multiagent systems, ACM Press, pp. 161–168, http://teamcore.usc.edu/papers/2003/modi-aamas03.pdf . The original version of Adopt was later extended to be informed, that is, to use estimates of the solution costs to focus its search and run faster, see Ali, Syed; Koenig, Sven; Tambe, Milind (2005), "Preprocessing Techniques for Accelerating the DCOP Algorithm ADOPT", Proceedings of the fourth international joint conference on autonomous agents and multiagent systems, ACM Press, pp. 1041–1048, http://teamcore.usc.edu/papers/2005/aamas-paper.pdf . This extension of Adopt is typically used as reference implementation of Adopt.
  10. ^ Matsui, Toshihiro; Matsuo, Hiroshi; Iwata, Akira (February), "Efficient Method for Asynchronous Distributed Constraint Optimization Algorithm", Proceedings of Artificial Intelligence and Applications, pp. 727–732, http://www.matlab.nitech.ac.jp/~matsuo/AIA05-1.pdf 

Books and Surveys

  • Meisels, Amnon (2008), Distributed Search by Constrained Agents, Springer, ISBN 978-1-848-00040-7 
  • Yokoo, Makoto (2001), Distributed constraint satisfaction: Foundations of cooperation in multi-agent systems, Springer, ISBN 978-3-540-67596-9 
  • Yokoo, M.; Hirayama, K. (2000), "Algorithms for distributed constraint satisfaction: A review", Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 185–207. A survey.
