Abstract

The study of probabilistically verifiable proofs originated in the mid 1980s with the introduction of Interactive Proof Systems (IPs). The primary focus of research in this area in the 1980s was twofold: the role of zero-knowledge interactive proofs within cryptographic protocols, and characterizing which languages are efficiently interactively provable. In the 1990s, the focus of research on the topic shifted. Extensions of the interactive proof model, such as Multiprover Interactive Proofs (MIPs) and Probabilistically Checkable Proofs (PCPs), were considered with the intention of expanding our notion of what should be considered efficiently verifiable. In addition, researchers took a closer look at the exact resources (and tradeoffs among them) needed to verify a proof in various proof systems. This culminated in the important discovery that it is possible to verify NP statements (with constant error probability) by examining only a constant number of bits of a PCP and using a logarithmic amount of randomness.

Perhaps the most dramatic development, however, has been the connection found between probabilistically verifiable proofs and proving hardness of approximation for optimization problems. It has been shown that a large variety of optimization versions of NP-hard problems (e.g., the maximum size of a clique in a graph, the minimum number of colors necessary to color a graph, and the maximum number of clauses satisfiable in a CNF formula) are not only NP-hard to solve exactly but also NP-hard to approximate in a very strong sense. The tools to establish hardness of approximation came directly from results on MIPs and PCPs. Indeed, almost every improvement in the efficiency of these proof systems translates directly into larger factors within which these optimization problems are hard to approximate.

In 1994--1995 two exciting workshops were held at the Weizmann Institute in Israel on the new developments in probabilistically verifiable proofs and their applications to approximation problems, cryptography, program checking, and complexity theory at large. Over 60 papers were presented in the workshop series, and we are proud to include three of them in this special section.

"On the Power of Finite Automata with Both Nondeterministic and Probabilistic States" by Anne Condon, Lisa Hellerstein, Samuel Pottle, and Avi Wigderson considers constant-round interactive proof systems in which the verifier is restricted to constant space and public coins. An equivalent characterization is finite automata with both nondeterministic and random states (npfa's), which accept their languages with a small probability of error. The paper shows that npfa's restricted to run in polynomial expected time accept only the regular languages in the case of npfa's with a 1-way input head, and that if L is a nonregular language, then either L or its complement is not accepted by any npfa with a 2-way input head.

"A Parallel Repetition Theorem" by Ran Raz addresses and resolves the Parallel Repetition Conjecture, which had eluded researchers for some time. The broader topic is what happens to the error probability of proof systems when they are composed. It has been known for a while that sequential composition of proof systems (both single-prover and multiprover interactive proofs) reduces the error exponentially, but this increases the number of rounds.
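To make this error reduction concrete, here is a minimal sketch (the symbols \delta, k, and r are illustrative and are not taken from the papers in this section): if a proof system with r rounds has perfect completeness and soundness error \delta, then repeating it k times sequentially and accepting only when every repetition accepts gives

\[
  \text{soundness error} \;\le\; \delta^{k},
  \qquad
  \text{number of rounds} \;=\; k \cdot r .
\]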
For interactive proof systems, parallel repetition is known to reduce the error exponentially, and the Parallel Repetition Conjecture asserts that the same holds for one-round two-prover proof systems. Raz proves a constructive bound on the error probability, which indeed decreases at an exponential rate. The constant in the exponent is logarithmic in the total number of possible answers of the two provers, which means one can achieve two-prover one-round MIPs for NP statements with arbitrarily small constant error probability. This, in turn, has played a crucial role in further developments in the area, and in particular in those reported in the next paper.

"Free Bits, PCPs, and Nonapproximability---Towards Tight Results" by Mihir Bellare, Oded Goldreich, and Madhu Sudan continues the investigation of PCPs and nonapproximability, with an emphasis on getting closer to tight results. The work consists of three parts. The first part presents several PCP proof systems for NP, based on a new error-correcting code called the Long Code. The second part shows that the connection between PCPs and hardness of approximation is not accidental; in particular, it shows that the transformation of a PCP for NP into hardness results for MaxClique can be reversed. Finally, the third part initiates a systematic investigation of the properties of PCPs as a function of various parameters: randomness, query complexity, free-bit complexity, amortized free-bit complexity, proof size, etc.

Two more papers submitted for this special section were not ready for publication at this time; they will appear in future issues of the SIAM Journal on Computing.
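For reference, the Parallel Repetition Theorem discussed above is often stated in the following form (a sketch; the symbols are illustrative and the universal constant c > 0 is not specified here). Suppose a one-round two-prover game G has value at most 1 - \epsilon and the total number of possible answers of the two provers is s; then the value of the n-fold parallel repetition G^n satisfies

\[
  \mathrm{val}(G^{n}) \;\le\; \bigl(1 - \epsilon^{c}\bigr)^{\Omega(n/\log s)} .
\]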
