IP (complexity)

In computational complexity theory, the class IP is the class of problems solvable by an interactive proof system. The concept of an interactive proof system was first introduced by Goldwasser et al. in 1985. An interactive proof system consists of two machines: a prover, P, which presents a proof that a given string w is a member of some language, and a verifier, V, which checks that the presented proof is correct. The prover is assumed to have unlimited computational power and storage, while the verifier is a probabilistic polynomial-time machine with access to a random bit string whose length is polynomial in the length n of w. The two machines exchange a polynomial number, p(n), of messages, and once the interaction is completed the verifier must decide whether or not w is in the language, with at most a 1/3 chance of error. (In particular, any language in BPP is in IP, since the verifier can simply ignore the prover and make the decision on its own.)

More formally:

For any language L, we have L \in IP \Leftrightarrow \exists V, P \;\; \forall Q, w:

* w \in L \Rightarrow \Pr[V \leftrightarrow P \text{ accepts } w] \ge \frac{2}{3}
* w \notin L \Rightarrow \Pr[V \leftrightarrow Q \text{ accepts } w] \le \frac{1}{3}
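
To make the message-passing structure concrete, here is a minimal sketch of the interaction loop, assuming hypothetical callables `verifier` and `prover` for the two machines (the names and interfaces are illustrative, not part of the definition):

```python
import random

def run_interaction(verifier, prover, w, p):
    """Drive p rounds of a generic interactive proof on input w, then return
    the verifier's final accept/reject decision.

    `verifier(w, r, transcript)` and `prover(w, transcript)` are assumed
    interfaces standing in for the machines V and P defined above: V sees its
    private random bits r, while P sees only the input and the transcript.
    """
    r = [random.getrandbits(1) for _ in range(p)]    # V's polynomial-length random string
    transcript = []                                  # the message history exchanged so far
    for _ in range(p):
        transcript.append(verifier(w, r, transcript))    # V's message (may depend on r)
        transcript.append(prover(w, transcript))         # P's reply (unbounded power, no r)
    return verifier(w, r, transcript) == "accept"        # V's final decision
```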

The Arthur-Merlin protocol, introduced by László Babai, is similar in nature, except that the number of rounds of interaction is bounded by a constant rather than a polynomial.

Goldwasser and Sipser showed that "public-coin" protocols, where the random numbers used by the verifier are provided to the prover along with the challenges, are no less powerful than private-coin protocols: at most two additional rounds of interaction are required to replicate the effect of a private-coin protocol.

In the following section we prove that IP = PSPACE, an important theorem in computational complexity, which shows that a polynomial-time verifier can be convinced, through interaction with a powerful prover, that a string belongs to any PSPACE language, even though a conventional written proof of membership may be exponentially long.


Proof that IP = PSPACE

In order to prove that IP and PSPACE are equal, we show that IP is a subset of PSPACE and that PSPACE is a subset of IP. To show that IP \subseteq PSPACE, we present a simulation of an interactive proof system by a polynomial-space machine. To show that PSPACE \subseteq IP, we show that the PSPACE-complete language TQBF is in IP. Both parts of the proof are adapted from Sipser.

IP is a subset of PSPACE

Let A be a language in IP, and suppose that on an input w of length n, A's verifier V exchanges exactly p = p(n) messages. We construct a polynomial-space machine M that simulates V. To do this, we first define the quantity that M must compute:

\Pr[V \text{ accepts } w] = \max_P \Pr[V \leftrightarrow P \text{ accepts } w]

By the definition of IP, we have \Pr[V \text{ accepts } w] \ge \frac{2}{3} if w \in A and \Pr[V \text{ accepts } w] \le \frac{1}{3} if w \notin A.

Now it must be shown that this value can be calculated in polynomial space. We take M_j to denote the sequence of messages m_1 \# \ldots \# m_j exchanged by the prover and the verifier, and we generalize the interaction of V and P to start with an arbitrary message stream M_j. We set (V \leftrightarrow P)(w, r, M_j) = \text{accept} if M_j can be extended with messages m_{j+1} through m_p such that:

* For j \le i < p, where i is even, V(w, r, M_i) = m_{i+1}
* For j \le i < p, where i is odd, P(w, r, M_i) = m_{i+1}
* The final message m_p in the message history is accept

In other words, when i is even the verifier sends the next message; when i is odd the prover sends it; and the final message must be accept. The first two rules ensure that the message sequence is valid, and the third ensures that this message sequence leads to an accept.

Next, further generalizing the earlier definitions, and taking a random string r of length p, we define:

\Pr[V \leftrightarrow P \text{ accepts } w \text{ starting at } M_j] = \Pr[(V \leftrightarrow P)(w, r, M_j) = \text{accept}]

Now, we can define:

\Pr[V \text{ accepts } w \text{ starting at } M_j] = \max_P \Pr[V \leftrightarrow P \text{ accepts } w \text{ starting at } M_j]

and for every 0 \le j \le p and every message history M_j, we inductively define the function N_{M_j}:

N_{M_j} = \begin{cases} 0 & j = p \text{ and } m_p = \text{reject} \\ 1 & j = p \text{ and } m_p = \text{accept} \\ \max_{m_{j+1}} N_{M_{j+1}} & j < p \text{ and } j \text{ is odd} \\ \text{wt-avg}_{m_{j+1}} N_{M_{j+1}} & j < p \text{ and } j \text{ is even} \end{cases}

where the term \text{wt-avg}_{m_{j+1}} N_{M_{j+1}} is defined as follows:

\text{wt-avg}_{m_{j+1}} N_{M_{j+1}} = \sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot N_{M_{j+1}}

where \Pr_r is the probability taken over the random string r of length p. This expression is the average of N_{M_{j+1}}, weighted by the probability that the verifier sent the message m_{j+1}.

Take M_0 to be the empty message sequence. We now show that N_{M_0} can be computed in polynomial space and that N_{M_0} = \Pr[V \text{ accepts } w]. First, to compute N_{M_0}, an algorithm can recursively calculate the values N_{M_j} for every j and every M_j. Since the depth of the recursion is p, only polynomial space is necessary. Second, we need N_{M_0} = \Pr[V \text{ accepts } w], the value needed to determine whether w is in A. We prove this second claim by induction on j below.
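
Before the induction, here is a minimal sketch of the recursive computation just described. It is an illustration, not Sipser's construction verbatim: the verifier predicate V(w, r, M) and the finite message alphabet are assumed interfaces, and each probability is computed exactly by enumerating every random string r of length p one at a time, so only the recursion stack (of depth p) needs to be stored.

```python
from itertools import product

def N(w, V, alphabet, p, M=()):
    """Compute N_{M_j} for the message history M (a tuple of messages).

    V(w, r, M) is an assumed interface giving the verifier's next message for
    input w, random string r and history M; `alphabet` is the finite set of
    possible messages.
    """
    j = len(M)
    if j == p:                                       # base case: the last message decides
        return 1.0 if M[-1] == "accept" else 0.0
    if j % 2 == 1:                                   # prover's turn: best possible message
        return max(N(w, V, alphabet, p, M + (m,)) for m in alphabet)
    # verifier's turn: average over its messages, weighted by Pr_r[V(w, r, M) = m]
    total = 0.0
    for m in alphabet:
        hits = sum(1 for r in product([0, 1], repeat=p) if V(w, r, M) == m)
        total += (hits / 2 ** p) * N(w, V, alphabet, p, M + (m,))
    return total
```

Calling N(w, V, alphabet, p) with the empty history returns N_{M_0}; the running time is exponential, but the space used is polynomial in p, which is all the simulation requires.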

We must show that for every 0 \le j \le p and every M_j, N_{M_j} = \Pr[V \text{ accepts } w \text{ starting at } M_j], and we will do this by induction on j. The base case is j = p; we then work downward from p to 0.

The base case (j = p) is straightforward. Since m_p is either accept or reject, if m_p is accept then N_{M_p} is defined to be 1 and \Pr[V \text{ accepts } w \text{ starting at } M_p] = 1, since the message stream indicates acceptance; thus the claim holds. If m_p is reject, the argument is analogous.

For the inductive hypothesis, we assume that for some j + 1 \le p and any message sequence M_{j+1}, N_{M_{j+1}} = \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}], and then prove the claim for j and any message sequence M_j.

If j is even, m_{j+1} is a message from V to P. By the definition of N_{M_j}, N_{M_j} = \sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot N_{M_{j+1}}. By the inductive hypothesis, this equals \sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}], which by definition is \Pr[V \text{ accepts } w \text{ starting at } M_j].

If j is odd, m_{j+1} is a message from P to V. By definition, N_{M_j} = \max_{m_{j+1}} N_{M_{j+1}}. By the inductive hypothesis, this equals \max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}]. This is equal to \Pr[V \text{ accepts } w \text{ starting at } M_j] since:

\max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}] \le \Pr[V \text{ accepts } w \text{ starting at } M_j]

because the prover on the right-hand side could simply send the message m_{j+1} that maximizes the expression on the left-hand side, and:

\max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}] \ge \Pr[V \text{ accepts } w \text{ starting at } M_j]

because the same prover cannot do any better than send that same message. Thus the claim holds whether j is even or odd, and the proof that IP \subseteq PSPACE is complete.

Here we have constructed a polynomial-space machine that uses the best prover P for a particular string w in language A. We use this best prover in place of a prover with random input bits because we are able to try every set of random input bits in polynomial space. Since we have simulated an interactive proof system with a polynomial-space machine, we have shown that IP \subseteq PSPACE, as desired.

∎

PSPACE is a subset of IP

In order to illustrate the technique used to prove PSPACE \subseteq IP, we first prove a weaker theorem, due to Lund et al.: \#SAT \in IP. Then, using the concepts from this proof, we extend it to show that TQBF \in IP. Since TQBF is PSPACE-complete and TQBF \in IP, it follows that PSPACE \subseteq IP.

#SAT is a member of IP

We begin by showing that \#SAT \in IP, where:

\#SAT = \{ \langle \phi, k \rangle \mid \phi \text{ is a CNF formula with exactly } k \text{ satisfying assignments} \}.

Note that this is different from the normal definition of #SAT, in that it is a decision problem, rather than a function.

First we use arithmetization to map the Boolean formula with n variables, \phi(b_1, b_2, \ldots, b_n), to a polynomial p_\phi(x_1, x_2, \ldots, x_n), where p_\phi mimics \phi in that p_\phi is 1 if \phi is true and 0 otherwise, provided that the variables of p_\phi are assigned Boolean values. The Boolean operations \vee, \wedge and \neg used in \phi are simulated in p_\phi by replacing the operators of \phi as follows:

* a \wedge b is replaced by ab
* a \vee b is replaced by a * b := 1 - (1-a)(1-b)
* \neg a is replaced by 1 - a

As an example, \phi = a \wedge (b \vee \neg c) would be converted into a polynomial as follows:

* p_\phi = a \wedge (b \vee \neg c)
* p_\phi = a(1 - (1-b)(1 - (1-c)))
* p_\phi = a(1 - (1-b)c)
* p_\phi = a - (ac - abc)

The operations ab and a * b each result in a polynomial with a degree bounded by the sum of the degrees of the polynomials for a and b, and hence the degree of any variable is at most the length of \phi.

Now let F be a finite field of order q > 2^n; also require that q be at least 1000. For each 0 \le i \le n and all a_1, \ldots, a_i \in F, define

f_i(a_1, \ldots, a_i) = \sum_{a_{i+1}, \ldots, a_n \in \{0,1\}} p_\phi(a_1, \ldots, a_n).

Note that f_0 is a function of no variables, and that its value f_0() is the number of satisfying assignments of \phi.
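
As a concrete illustration, the following sketch arithmetizes the example formula a \wedge (b \vee \neg c) and evaluates f_i by brute-force summation over the remaining Boolean variables. It is a toy example: the three-variable formula, the prime 1009 and the helper names p_phi and f are choices made for this sketch, not part of the proof.

```python
Q = 1009   # a prime exceeding both 1000 and 2^n for this 3-variable toy example

def p_phi(a, b, c):
    """Arithmetization of phi = a AND (b OR NOT c), reduced mod Q.

    AND -> multiplication, NOT x -> 1 - x, x OR y -> 1 - (1 - x)(1 - y).
    """
    return (a * (1 - (1 - b) * (1 - (1 - c)))) % Q

def f(prefix, n=3):
    """f_i(a_1, ..., a_i): sum p_phi over all Boolean settings of x_{i+1}..x_n."""
    i = len(prefix)
    total = 0
    for rest in range(2 ** (n - i)):
        bits = [(rest >> k) & 1 for k in range(n - i)]
        total += p_phi(*(list(prefix) + bits))
    return total % Q

# f_0() is the number of satisfying assignments of phi (three of them here).
assert f(()) == 3
```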

Now the protocol for #SAT works as follows:

* Phase 0:
The prover P chooses a prime q > 2^n and computes f_0(); it then sends q and f_0() to the verifier V. V checks that q is a prime greater than max(1000, 2^n) and that f_0() = k.
* Phase 1:
P sends the coefficients of f_1(z) as a polynomial in z. V verifies that the degree of f_1 is less than n and that f_0() = f_1(0) + f_1(1); if not, V rejects. V then sends a random number r_1 from F to P.
* Phase i:
P sends the coefficients of f_i(r_1, \ldots, r_{i-1}, z) as a polynomial in z. V verifies that the degree of f_i is less than n and that f_{i-1}(r_1, \ldots, r_{i-1}) = f_i(r_1, \ldots, r_{i-1}, 0) + f_i(r_1, \ldots, r_{i-1}, 1); if not, V rejects. V then sends a random number r_i from F to P.
* Phase n+1:
V evaluates p_\phi(r_1, \ldots, r_n) and compares it to the value f_n(r_1, \ldots, r_n). If they are equal, V accepts; otherwise V rejects.

Note that this is a public-coin algorithm.
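
The phases fit together as in the sketch below, which continues the toy example (it reuses Q, p_phi and f from the previous sketch; the degree bound and function names are illustrative assumptions). The honest prover is simulated in place by evaluating the true f_i, and "sending the coefficients" is represented by sending the polynomial's values at enough points, from which V interpolates; using only deg_bound + 1 points also implicitly enforces V's degree check.

```python
import random

def interpolate(points, x):
    """Evaluate at x the unique low-degree polynomial through `points`,
    using Lagrange interpolation over the integers mod Q."""
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xk, _ in points:
            if xk != xj:
                num = num * (x - xk) % Q
                den = den * (xj - xk) % Q
        total = (total + yj * num * pow(den, Q - 2, Q)) % Q
    return total

def sumcheck(k, n=3, deg_bound=3):
    """Run the #SAT protocol on the claim f_0() = k for the toy formula."""
    claim = k % Q
    rs = []                                          # the random challenges r_1, ..., r_n
    for _ in range(n):
        # Phase i: P "sends" f_i(r_1, ..., r_{i-1}, z), here as deg_bound+1 point values.
        poly = [(t, f(tuple(rs) + (t,))) for t in range(deg_bound + 1)]
        if (interpolate(poly, 0) + interpolate(poly, 1)) % Q != claim:
            return False                             # consistency check fails: V rejects
        r = random.randrange(Q)                      # V's public-coin challenge
        rs.append(r)
        claim = interpolate(poly, r)                 # the claim carried to the next phase
    # Phase n+1: V evaluates the arithmetized formula itself at the random point.
    return claim == p_phi(*rs)

assert sumcheck(3)        # the true count is accepted
assert not sumcheck(4)    # an honest prover cannot support a false count
```

A dishonest prover would instead have to send polynomials that differ from the true f_i, and the analysis below bounds the probability that such a prover survives all of V's checks.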

If \phi has k satisfying assignments, clearly V will accept. If \phi does not have k satisfying assignments, we assume there is a prover \tilde{P} that tries to convince V that \phi does have k satisfying assignments. We show that this can only be done with low probability.

To prevent V from rejecting in phase 0, \tilde{P} has to send an incorrect value \tilde{f}_0() to V. Then, in phase 1, \tilde{P} must send an incorrect polynomial \tilde{f}_1 with the property that \tilde{f}_1(0) + \tilde{f}_1(1) = \tilde{f}_0(). When V chooses a random r_1 to send to P, \Pr[\tilde{f}_1(r_1) = f_1(r_1)] \le n^{-2}. This is because a nonzero polynomial in a single variable of degree at most d can have no more than d roots, so any two distinct polynomials in a single variable of degree at most d can agree in at most d places. Since |F| > 2^n, the chance of r_1 being one of these values is at most n/2^n < n/n^3 if n > 10, or at most n/1000 \le n/n^3 if n \le 10.

Generalizing this idea to the other phases, for each 1 \le i \le n, if \tilde{f}_{i-1}(r_1, \ldots, r_{i-1}) \ne f_{i-1}(r_1, \ldots, r_{i-1}), then for r_i chosen randomly from F, \Pr[\tilde{f}_i(r_1, \ldots, r_i) = f_i(r_1, \ldots, r_i)] \le n^{-2}. There are n phases, so the probability that \tilde{P} is lucky because V selects a convenient r_i at some phase is at most 1/n. Hence no prover can make the verifier accept with probability greater than 1/n. We can also see from the definition that the verifier V operates in probabilistic polynomial time. Thus \#SAT \in IP.
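
Explicitly, the soundness bound follows from a union bound over the n phases (a one-line derivation added here for clarity):

\Pr[\tilde{P} \text{ fools } V] \;\le\; \sum_{i=1}^{n} \Pr[\tilde{f}_i(r_1, \ldots, r_i) = f_i(r_1, \ldots, r_i)] \;\le\; n \cdot n^{-2} = \frac{1}{n}.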

TQBF is a member of IP

In order to show that PSPACE is a subset of IP, we need to choose a PSPACE-complete problem and show that it is in IP. Once we show this, it is clear that PSPACE \subseteq IP. The proof technique demonstrated here is credited to Adi Shamir.

We know that TQBF is PSPACE-complete. So let \psi be a quantified Boolean expression:

\psi = Q_1 x_1 Q_2 x_2 \ldots Q_m x_m [\phi]

where \phi is a CNF formula and each Q_i is a quantifier, either \exists or \forall. Now f_i is defined as in the previous proof, except that it also accounts for the quantifiers:

f_i(a_1, \ldots, a_i) = \begin{cases} 1 & \text{if } Q_{i+1} x_{i+1} \ldots Q_m x_m [\phi(a_1, a_2, \ldots, a_i)] \text{ is true} \\ 0 & \text{otherwise} \end{cases}

Here, \phi(a_1, a_2, \ldots, a_i) is \phi with a_1 through a_i substituted for x_1 through x_i. Thus f_0() is the truth value of \psi. In order to arithmetize \psi we use the following rules:

If Q_{i+1} = \forall, then f_i(a_1, a_2, \ldots, a_i) = f_{i+1}(a_1, a_2, \ldots, a_i, 0) \cdot f_{i+1}(a_1, a_2, \ldots, a_i, 1)

If Q_{i+1} = \exists, then f_i(a_1, a_2, \ldots, a_i) = f_{i+1}(a_1, a_2, \ldots, a_i, 0) * f_{i+1}(a_1, a_2, \ldots, a_i, 1)

where, as before, we define x * y = 1 - (1-x)(1-y).

If we simply apply the method described for \#SAT, we face a problem: for any f_i, the degree of the resulting polynomial may double with each quantifier. In order to prevent this, we introduce a new reduction operator R, which reduces the degree of the polynomial without changing its behavior on Boolean inputs.

So now, before we arithmetize \psi = Q_1 x_1 Q_2 x_2 \ldots Q_m x_m [\phi], we introduce a new expression:

\psi' = Q_1 x_1 R_{x_1} Q_2 x_2 R_{x_1} R_{x_2} \ldots Q_m x_m R_{x_1} \ldots R_{x_m} [\phi]

Or written another way:

\psi' = S_1 y_1 S_2 y_2 \ldots S_k y_k [\phi], where S_i \in \{\forall, \exists, R\} and y_i \in \{x_1, x_2, \ldots, x_m\}

Now for every i \le k we define the function f_i. We define f_k(x_1, x_2, \ldots, x_m) to be the polynomial p_\phi(x_1, x_2, \ldots, x_m) obtained by arithmetizing \phi. Then, in order to keep the degree of the polynomial low, we define f_i in terms of f_{i+1}:

If S_{i+1} = \forall, then f_i(a_1, a_2, \ldots, a_i) = f_{i+1}(a_1, a_2, \ldots, a_i, 0) \cdot f_{i+1}(a_1, a_2, \ldots, a_i, 1)

If S_{i+1} = \exists, then f_i(a_1, a_2, \ldots, a_i) = f_{i+1}(a_1, a_2, \ldots, a_i, 0) * f_{i+1}(a_1, a_2, \ldots, a_i, 1)

If S_{i+1} = R, then f_i(a_1, a_2, \ldots, a_i, a) = (1-a) f_{i+1}(a_1, a_2, \ldots, a_i, 0) + a f_{i+1}(a_1, a_2, \ldots, a_i, 1)

Note that the reduction operation R does not increase the degree of the polynomial, and that applying R_x does not change the value of the function on Boolean inputs. So f_0() is still the truth value of \psi, but applying R_x produces a result that is linear in x. Also, after each Q_i x_i we add R_{x_1} \ldots R_{x_i} in \psi' in order to reduce the degree of every variable back down to 1 after arithmetizing Q_i.
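
The three operators can be sketched as follows (an illustration continuing the toy example, reusing Q and p_phi from the earlier sketch; the combinator names op_forall, op_exists and op_reduce are invented for this sketch). Each quantifier operator eliminates the last argument of a function, while R keeps the arity and merely makes the function linear in the chosen argument, leaving its values on Boolean inputs unchanged.

```python
def op_forall(g):
    """Arithmetized forall over the last argument: multiply the two branches."""
    return lambda *a: (g(*a, 0) * g(*a, 1)) % Q

def op_exists(g):
    """Arithmetized exists over the last argument: x * y = 1 - (1-x)(1-y)."""
    return lambda *a: (1 - (1 - g(*a, 0)) * (1 - g(*a, 1))) % Q

def op_reduce(g, i):
    """R_{x_i}: the polynomial linear in argument i that agrees with g
    whenever that argument is 0 or 1."""
    def reduced(*a):
        a0, a1 = a[:i] + (0,) + a[i+1:], a[:i] + (1,) + a[i+1:]
        return ((1 - a[i]) * g(*a0) + a[i] * g(*a1)) % Q
    return reduced

# Toy check: psi = exists a forall b exists c [a AND (b OR NOT c)] is true,
# arithmetized from the inside out with the R's interleaved as in psi'.
g = p_phi
for i in (0, 1, 2):          # R_{x_1} R_{x_2} R_{x_3}
    g = op_reduce(g, i)
g = op_exists(g)             # exists c
for i in (0, 1):             # R_{x_1} R_{x_2}
    g = op_reduce(g, i)
g = op_forall(g)             # forall b
g = op_reduce(g, 0)          # R_{x_1}
g = op_exists(g)             # exists a
assert g() == 1              # f_0() equals the truth value of psi
```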

Now let us describe the protocol. If n is the length of \psi, all arithmetic operations in the protocol are over a field of size at least n^4.

* Phase 0:
P \rightarrow V: P sends f_0() to V. V checks that f_0() = 1 and rejects if not.
* Phase 1:
P \rightarrow V: P sends f_1(z) to V. V uses the coefficients to evaluate f_1(0) and f_1(1). It then checks that the degree of the polynomial is at most n and that the appropriate one of the following identities holds:

* f_0() = f_1(0) \cdot f_1(1) if S_1 = \forall
* f_0() = f_1(0) * f_1(1) if S_1 = \exists
* f_0() = (1-r) f_1(0) + r f_1(1) if S_1 = R

If the check fails, V rejects.

* Phase i:
P \rightarrow V: P sends f_i(r_1, r_2, \ldots, r_{i-1}, z) as a polynomial in z, where r_1, \ldots, r_{i-1} denote the previously chosen random values.

V uses the coefficients to evaluate f_i(r_1, \ldots, r_{i-1}, 0) and f_i(r_1, \ldots, r_{i-1}, 1). It then checks that the degree of the polynomial is at most n and that the appropriate one of the following identities holds:

* f_{i-1}(r_1, \ldots, r_{i-1}) = f_i(r_1, \ldots, r_{i-1}, 0) \cdot f_i(r_1, \ldots, r_{i-1}, 1) if S_i = \forall
* f_{i-1}(r_1, \ldots, r_{i-1}) = f_i(r_1, \ldots, r_{i-1}, 0) * f_i(r_1, \ldots, r_{i-1}, 1) if S_i = \exists
* f_{i-1}(r_1, \ldots, r) = (1-r) f_i(r_1, \ldots, r_{i-1}, 0) + r f_i(r_1, \ldots, r_{i-1}, 1) if S_i = R, where r is the random value previously chosen for the variable being re-randomized

If the check fails, V rejects.

V \rightarrow P: V picks a random r in F and sends it to P. (If S_i = R, then this r replaces the previous random value chosen for that variable.)

Go to phase i+1, where P must persuade V that f_i(r_1, \ldots, r) is correct.

* Phase k+1:
V evaluates p_\phi(r_1, r_2, \ldots, r_m) and checks whether p_\phi(r_1, \ldots, r_m) = f_k(r_1, \ldots, r_m). If they are equal, V accepts; otherwise V rejects.

This is the end of the protocol description.
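
The identity test that V performs in each phase can be sketched as a single function (an illustration only, reusing Q from the toy sketches above; the function name and its string arguments are choices made for this sketch):

```python
def check_identity(op, prev_claim, g0, g1, r_prev=None):
    """The verifier's consistency test for one phase of the TQBF protocol.

    op is the operator removed in this phase ('forall', 'exists' or 'R');
    prev_claim is the value claimed for f_{i-1} at the current random point;
    g0 and g1 are the prover's polynomial evaluated at 0 and 1; r_prev is the
    earlier challenge for the variable being re-randomized when op == 'R'.
    """
    if op == 'forall':
        return prev_claim == (g0 * g1) % Q
    if op == 'exists':
        return prev_claim == (1 - (1 - g0) * (1 - g1)) % Q
    if op == 'R':
        return prev_claim == ((1 - r_prev) * g0 + r_prev * g1) % Q
    raise ValueError("unknown operator")
```

If the test passes, V draws a fresh random challenge and the protocol proceeds to the next phase with the prover's polynomial evaluated at that challenge as the new claim, exactly as in the \#SAT protocol.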

If \psi is true, then V will accept when P follows the protocol. Conversely, if \psi is false and \tilde{P} is a malicious prover, then \tilde{P} must lie at phase 0 and send some incorrect value for f_0(). If at phase i, V holds an incorrect value for f_{i-1}(r_1, \ldots), then f_i(r_1, \ldots, 0) and f_i(r_1, \ldots, 1) will likely also be incorrect, and so forth. The probability for \tilde{P} to get lucky on some random r is at most the degree of the polynomial divided by the field size, n/n^4. The protocol runs through O(n^2) phases, so by a union bound the probability that \tilde{P} gets lucky at some phase is at most 1/n. If \tilde{P} is never lucky, then V will reject at phase k+1.

Since we have now shown both IP \subseteq PSPACE and PSPACE \subseteq IP, we can conclude that IP = PSPACE, as desired. Moreover, we have shown that any IP algorithm may be taken to be public-coin, since the reduction from PSPACE to IP has this property.

∎

Variants

There are a number of variants of IP which slightly modify the definition of the interactive proof system. We summarize some of the more well-known ones here.

MIP

"Main article: Interactive proof system#MIP"

In 1988, Goldwasser et al. created an even more powerful interactive proof system based on IP called MIP in which there are "two" independent provers. The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in "exponential time", a very large class. Moreover, all languages in NP have zero-knowledge proofs in an MIP system, without any additional assumptions; this is only known for IP assuming the existence of one-way functions.

IPP

IPP ("unbounded IP") is a variant of IP where we replace the BPP verifier by a PP verifier. More precisely, we modify the completeness and soundness conditions as follows:

* Completeness: if a string is in the language, the honest verifier will be convinced of this fact by an honest prover with probability at least 1/2.
* Soundness: if the string is not in the language, no prover can convince the honest verifier that it is in the language, except with probability less than 1/2.

Although IPP also equals PSPACE, IPP protocols behave quite differently from IP with respect to oracles: IPP = PSPACE with respect to all oracles, while IP ≠ PSPACE with respect to almost all oracles. [R. Chang, B. Chor, Oded Goldreich, J. Hartmanis, J. Håstad, D. Ranjan, and P. Rohatgi. [http://citeseer.ist.psu.edu/chang97random.html The random oracle hypothesis is false]. "Journal of Computer and System Sciences", 49(1):24-39, 1994.]

QIP

QIP is a version of IP that replaces the BPP verifier by a BQP verifier, where BQP is the class of problems solvable by quantum computers in polynomial time. The messages are composed of qubits. [J. Watrous. [http://citeseer.ist.psu.edu/watrous99pspace.html PSPACE has constant-round quantum interactive proof systems]. "Proceedings of IEEE FOCS'99", pp. 112-119, 1999.] It is not yet known whether QIP strictly contains IP (that is, whether quantum computation adds power to interactive proofs), but it is known that QIP = QIP[3], so that more than three rounds are never necessary. QIP is also contained in EXPTIME. [A. Kitaev and J. Watrous. [http://www.cpsc.ucalgary.ca/~jwatrous/papers/qip2.ps Parallelization, amplification, and exponential time simulation of quantum interactive proof systems]. "Proceedings of ACM STOC'2000", pp. 608-617, 2000.]

compIP

Whereas IPP and QIP give more power to the verifier, a compIP system ("competitive IP proof system") weakens the completeness condition in a way that weakens the prover:

* Completeness: if a string is in the language "L", the honest verifier will be convinced of this fact by an honest prover with probability at least 2/3. Moreover, the prover will do so in probabilistic polynomial time given access to an oracle for the language "L".

Essentially, this makes the prover a BPP machine with access to an oracle for the language, but only in the completeness case, not the soundness case. The concept is that if a language is in compIP, then interactively proving it is in some sense as easy as deciding it. With the oracle, the prover can easily solve the problem, but its limited power makes it much more difficult to convince the verifier of anything. In fact, compIP isn't even known or believed to contain NP.

On the other hand, such a system can solve some problems believed to be hard. It can easily solve all NP-complete problems due to self-reducibility. Additionally, the standard proof that graph nonisomorphism is in IP also shows that it is in compIP, since the only hard operation the prover ever performs is isomorphism testing, which it can use the oracle to solve. Quadratic non-residuosity and graph isomorphism are also in compIP. [Shafi Goldwasser and Mihir Bellare. [http://www.cs.ucsd.edu/users/mihir/papers/compip.pdf The Complexity of Decision versus Search]. "SIAM Journal on Computing", Volume 23, No. 1, February 1994.] (Note that quadratic non-residuosity (QNR) is likely an easier problem than graph isomorphism, as QNR is in UP \cap coUP. [Cai JY, Threlfall RA, 2004. "A note on quadratic residuosity and UP." "Information Processing Letters" 92(3): 127-131.])

Additional Sources

* Babai, L. Trading group theory for randomness. In "Proceedings of the 17th ACM Symposium on the Theory of Computation". ACM, New York, 1985, pp. 421-429.

* Shafi Goldwasser, Silvio Micali, and Charles Rackoff. [http://portal.acm.org/citation.cfm?id=63434 The Knowledge complexity of interactive proof-systems] . "Proceedings of 17th ACM Symposium on the Theory of Computation", Providence, Rhode Island. 1985, pp. 291-304. [http://theory.lcs.mit.edu/~cis/pubs/shafi/1985-stoc.pdf Extended abstract]

* Shafi Goldwasser and Michael Sipser. [http://theory.lcs.mit.edu/~cis/pubs/shafi/1986-stoc.pdf Private coins versus public coins in interactive proof systems] . "Proceedings of the 18th Annual ACM Symposium on Theory of Computation". ACM, New York, 1986, pp. 59-68.

* Lund, C., Fortnow, L., Karloff, H., Nisan, N. Algebraic methods for interactive proof systems. In "Proceedings of the 31st Symposium on the Foundations of Computer Science". IEEE, New York, 1990, pp. 2-90.

* Adi Shamir. [http://portal.acm.org/citation.cfm?doid=146585.146609 IP = PSPACE] . "Journal of the ACM", volume 39, issue 4, p.869-877. October 1992.

* Alexander Shen. [http://doi.acm.org/10.1145/146585.146613 IP=PSpace: Simplified Proof] . J.ACM, v. 39(4), pp. 878-880, 1992.

* Sipser, Michael. "Introduction to the Theory of Computation", Boston, 1997, pp. 392-399.

* Complexity Zoo: [http://qwiki.caltech.edu/wiki/Complexity_Zoo#ip IP] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#mip MIP] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#ipp IPP] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#qip QIP] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#qip2 QIP(2)] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#compip compIP] , [http://qwiki.caltech.edu/wiki/Complexity_Zoo#frip frIP]

