Sipser-Lautemann theorem

In computational complexity theory, the Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that BPP (bounded-error probabilistic polynomial time) is contained in the polynomial-time hierarchy, and more specifically in Σ2 ∩ Π2.

In 1983, Michael Sipser showed that BPP is contained in the polynomial-time hierarchy. Péter Gács showed that BPP is actually contained in Σ2 ∩ Π2. Clemens Lautemann contributed by giving a simple proof of BPP's membership in Σ2 ∩ Π2, also in 1983.

Proof

Michael Sipser's version of the proof works as follows. Without loss of generality, a machine M deciding a language L ∈ BPP with error ≤ 2^{−|x|} can be chosen. (All BPP problems can be amplified to reduce the error probability exponentially.) The basic idea of the proof is to define a Σ2 sentence that is equivalent to stating that x is in the language L decided by M, by using a set of transforms of the random inputs.
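
As an informal illustration of the amplification step (a minimal sketch, not part of the original proof; base_decide is a hypothetical stand-in for a 1/3-error randomized decider, and the number of random bits per run is simplified):

    import random

    def amplify(base_decide, x, k):
        """Majority vote over k independent runs of a randomized decider whose
        error probability on every input is at most 1/3.  By a Chernoff bound
        the majority answer is wrong with probability at most 2**(-c*k) for
        some constant c > 0, so k proportional to |x| pushes the error below
        2**(-|x|).  base_decide(x, r) is a hypothetical stand-in for M."""
        runs = [base_decide(x, random.getrandbits(max(1, len(x)))) for _ in range(k)]
        return sum(runs) * 2 > k        # accept on a strict majority of accepting runs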

Since the output of M depends on the random input r as well as the input x, it is useful to define the set of random strings that produce the correct output: A(x) = {r | M(x,r) accepts}. The key to the proof is to note that when x ∈ L, A(x) is very large, and when x ∉ L, A(x) is very small. By using bitwise parity, ⊕, a set of transforms can be defined as A(x) ⊕ t = {r ⊕ t | r ∈ A(x)}. The first main lemma of the proof shows that the union of a small finite number of these transforms will contain the entire space of random input strings. Using this fact, a Σ2 sentence and a Π2 sentence can be generated that are true if and only if x ∈ L (see the corollary below).
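
The following sketch (illustrative only; M is a hypothetical stand-in for the amplified machine, and random strings are encoded as integers) shows the two objects used below, the accepting set A(x) and its bitwise-parity transform A(x) ⊕ t:

    def accepting_set(M, x, m):
        """A(x): the m-bit random strings r (encoded as integers) on which
        M(x, r) accepts.  Brute force over all 2**m strings, so only usable
        for tiny m; M is a hypothetical stand-in for the amplified machine."""
        return {r for r in range(2 ** m) if M(x, r)}

    def translate(A, t):
        """The bitwise-parity transform A ⊕ t = {r ⊕ t : r in A} used in the proof."""
        return {r ^ t for r in A}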

Lemma 1

The general idea of lemma one is to prove that if A(x) covers a large part of the random space R = \{0,1\}^{|r|}, then there exists a small set of translations that covers the entire space. In more mathematical language:

If \frac{|A(x)|}{|R|} \ge 1 - \frac{1}{2^{|x|}}, then there exist t_1, t_2, \ldots, t_{|r|} such that \bigcup_i A(x) \oplus t_i = R.

Proof. Randomly pick t_1, t_2, \ldots, t_{|r|}. Let S = \bigcup_i A(x) \oplus t_i (the union of all transforms of A(x)).

So, for all r \in R,

\Pr[r \notin S] = \Pr[r \notin A(x) \oplus t_1] \cdot \Pr[r \notin A(x) \oplus t_2] \cdots \Pr[r \notin A(x) \oplus t_{|r|}] \le \frac{1}{2^{|x| \cdot |r|}},

where the equality holds because the t_i are chosen independently and each r \oplus t_i is uniformly distributed, and the inequality uses the hypothesis on the size of A(x). By a union bound over the 2^{|r|} strings r,

\Pr[\exists r \colon r \notin S] \le \frac{2^{|r|}}{2^{|x| \cdot |r|}} < 1.

Thus with nonzero probability over the choice of t_1, \ldots, t_{|r|} we have S = R, so there exists a choice of t_1, \ldots, t_{|r|} such that

\bigcup_i A(x) \oplus t_i = R .
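
The probabilistic argument above can be checked empirically on tiny examples. The sketch below is illustrative only; it brute-forces sets of integers standing in for {0,1}^m and estimates how often m random translations of a near-complete set A cover the whole space:

    import random

    def random_shifts_cover(A, m, trials=200):
        """Empirical illustration of Lemma 1: when A misses only a tiny
        fraction of R = {0,1}^m (encoded as integers 0..2**m - 1), m uniformly
        random translations t_1, ..., t_m almost always satisfy
        union_i (A xor t_i) = R."""
        R = set(range(2 ** m))
        hits = 0
        for _ in range(trials):
            shifts = [random.randrange(2 ** m) for _ in range(m)]
            covered = set()
            for t in shifts:
                covered |= {r ^ t for r in A}
            hits += (covered == R)
        return hits / trials

    # Example: A contains every 8-bit string except one.
    # random_shifts_cover(set(range(256)) - {57}, 8)   # typically 1.0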

Lemma 2

The previous lemma shows that A(x) can cover every possible point in the space using a small set of translations. Complementary to this, for x ∉ L only a small fraction of the space is covered by A(x): by the choice of M, |A(x)| ≤ 2^{−|x|}·|R|, so for every choice of vectors t_1, ..., t_{|r|},

\left| \bigcup_i A(x) \oplus t_i \right| \le |r| \cdot |A(x)| \le \frac{|r|}{2^{|x|}} \cdot |R| < |R|

(because |r| is polynomial in |x|). Here \bigcup_i A(x) \oplus t_i is the set of all accepting random strings, exclusive-or'd with the vectors t_i. Therefore the set of random strings causing M(x,r) to accept cannot be made to cover R by any small set of vectors t_i:

\bigcup_i A(x) \oplus t_i \neq R .

Corollary

An important corollary of the two lemmas is that membership of x in L can be expressed as a Σ2 sentence:

x \in L \iff \exists t_1, \ldots, t_{|r|} \; \forall r \in R \; \bigvee_{1 \le i \le |r|} \big( M(x, r \oplus t_i) \text{ accepts} \big).

That is, x is in the language L if and only if there exist |r| binary vectors t_1, ..., t_{|r|} such that, for every random bit vector r, TM M accepts at least one of the strings r ⊕ t_i.

The above expression is in Σ2 in that it is first existentially and then universally quantified. Therefore BPP ⊆ Σ2. Because BPP is closed under complement, this proves BPP ⊆ Σ2 ∩ Π2.
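
For very small instances the corollary's Σ2 sentence can even be evaluated by brute force. The sketch below is purely illustrative (doubly exponential in the number of random bits; M stands in for the amplified machine and m for |r|):

    from itertools import product

    def sigma2_sentence_holds(M, x, m):
        """Brute-force evaluation of the Sigma_2 sentence from the corollary:
            exists t_1, ..., t_m  forall r  OR_i  M(x, r xor t_i)."""
        R = range(2 ** m)
        for ts in product(R, repeat=m):                       # exists t_1, ..., t_m
            if all(any(M(x, r ^ t) for t in ts) for r in R):  # forall r, some i accepts
                return True
        return False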

Lautemann's Proof

Here we present the proof (due to Lautemann) that BPP ⊆ Σ2. See Trevisan's notes for more information.

Lemma 3

Based on the definition of BPP, we first establish the following:

If L is in BPP, then there is an algorithm A such that for every x,

\Pr_r(A(x,r) = \text{right answer}) \ge 1 - \frac{1}{3m},

where m is the number of random bits, |r| = m = |x|^{O(1)}, and A runs in time |x|^{O(1)}.

Proof: Let A' be a BPP algorithm for L. For every x, \Pr_r(A'(x,r) = \text{wrong answer}) \le 1/3. A' uses m'(n) random bits, where n = |x|.

Do k(n) repetitions of A' and accept if and only if at least \frac{k(n)}{2} executions of A' accept. Define this new algorithm as A. So A uses k(n) m'(n) random bits and, by a Chernoff bound, \Pr_r(A(x,r) = \text{wrong answer}) \le 2^{-c k(n)} for some constant c > 0. We can then find k(n) with k(n) = \Theta(\log m'(n)) such that \frac{1}{2^{c k(n)}} \le \frac{1}{3 k(n) m'(n)}.
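
As a quick check that such a k(n) exists, one illustrative choice (assuming c is the constant from the Chernoff bound above) is

k(n) = \left\lceil \tfrac{2}{c} \log_2 m'(n) \right\rceil, \qquad\text{so that}\qquad 2^{-c\,k(n)} \le \frac{1}{m'(n)^2} \le \frac{1}{3\,k(n)\,m'(n)}

whenever m'(n) \ge 3 k(n), which holds for all sufficiently large n since k(n) = O(\log m'(n)).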

Theorem 1

Proof: Let L be in BPP and A as in Lemma 3. We want to show

x \in L \iff \exists y_1, \ldots, y_m \in \{0,1\}^m \; \forall z \in \{0,1\}^m \; \bigvee_{i=1}^m A(x, y_i \oplus z) = 1,

where m is the number of random bits used by A on input x.

First suppose x \in L. Then

\Pr_{y_1,\ldots,y_m}\big(\exists z\; A(x,y_1 \oplus z) = \cdots = A(x,y_m \oplus z) = 0\big)
\le \sum_{z \in \{0,1\}^m} \Pr_{y_1,\ldots,y_m}\big(A(x,y_1 \oplus z) = \cdots = A(x,y_m \oplus z) = 0\big)
\le 2^m \cdot \frac{1}{(3m)^m}
< 1,

where the first inequality is a union bound over z and the second uses the independence of the y_i together with Lemma 3.
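
As a quick numerical sanity check (illustrative values only), already for m = 3 the bound gives

2^m \cdot \frac{1}{(3m)^m} = \frac{8}{729} < 1,

and it only improves as m grows.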

So

\Pr_{y_1,\ldots,y_m}\big(\forall z\; \bigvee_i A(x,y_i \oplus z)\big) = 1 - \Pr_{y_1,\ldots,y_m}\big(\exists z\; A(x,y_1 \oplus z) = \cdots = A(x,y_m \oplus z) = 0\big) > 0.

So a suitable tuple (y_1, \ldots, y_m) exists.

Conversely, suppose x \notin L. Then, for any fixed choice of y_1, \ldots, y_m \in \{0,1\}^m,

\Pr_z\left(\bigvee_i A(x,y_i \oplus z)\right) \le \sum_i \Pr_z\big(A(x,y_i \oplus z) = 1\big)
\le m \cdot \frac{1}{3m}
= \frac{1}{3}.

So

\Pr_z\big(A(x,y_1 \oplus z) = \cdots = A(x,y_m \oplus z) = 0\big) = 1 - \Pr_z\left(\bigvee_i A(x,y_i \oplus z)\right)
\ge \frac{2}{3} > 0.

So, for this (arbitrary) choice of y_1, \ldots, y_m \in \{0,1\}^m, there is a z such that \bigvee_i A(x, y_i \oplus z) = 0. Hence the Σ2 sentence fails whenever x \notin L, completing the proof.

References

* M. Sipser. "A complexity theoretic approach to randomness." In Proceedings of the 15th ACM Symposium on Theory of Computing, pp. 330–335. ACM Press, 1983.
* C. Lautemann. "BPP and the polynomial hierarchy." Information Processing Letters 17(4), pp. 215–217, 1983.
* Luca Trevisan's lecture notes, University of California, Berkeley, http://www.cs.berkeley.edu/~luca/notes/

