This project focuses on problems in computational complexity theory, with the goal of clarifying the power and limitations of various important classes of algorithms (known as ``complexity classes''). Complexity classes provide the best tools currently available for understanding the complexity of real-world computational problems.
Recent progress in the field has given rise to (cautious) optimism that, by capitalizing on the property of ``strong downward self-reducibility'', one can find routes around some of the known barriers to proving lower bounds on the circuit size required to compute various functions. For problems that possess this property, modest lower bounds can be ``amplified'' to obtain superpolynomial lower bounds; an informal sketch of this amplification appears below. This project aims to investigate the power and applicability of this new approach. In parallel, we will investigate whether certain well-studied proof systems are incapable of proving even the modest lower bounds that would be required to bootstrap this ``amplification'' procedure. The project will also investigate other approaches to proving conditional circuit lower bounds.
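To indicate the shape of the ``amplification'' argument, here is an informal sketch; the particular circuit model, the exponent $k$, and the block-splitting self-reduction are placeholders chosen only for illustration, and the precise parameters differ in the actual theorems. A problem $A$ is strongly downward self-reducible if, for every small $\delta > 0$, instances of length $n$ can be solved by very shallow circuits whose oracle gates query $A$ only on instances of length at most $n^{\delta}$ (for example, an iterated product of $n$ group elements can be computed by first multiplying $n^{1-\delta}$ blocks of $n^{\delta}$ elements each, and then recursing on the $n^{1-\delta}$ partial products). Suppose such an $A$ had circuits of size $n^{k}$ for some fixed $k$. Replacing each oracle gate at input length $n^{\delta}$ by such a circuit costs $(n^{\delta})^{k}$ per gate, so one level of the self-reduction yields circuits of size roughly
\[
  n^{1-\delta}\cdot\bigl(n^{\delta}\bigr)^{k} \;=\; n^{\,1+\delta(k-1)},
\]
and choosing $\delta \le \epsilon/(k-1)$ (and recursing for a constant number of levels) drives the size down to roughly $n^{1+\epsilon}$. In the contrapositive, a ``pathetic'' lower bound of $n^{1+\epsilon}$, for any fixed $\epsilon > 0$, already rules out polynomial-size circuits of this kind altogether.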
The project will also aim to build on the recent discovery that the ``counting hierarchy'' (a subclass of PSPACE) is able to perform a large class of numerical computations, and thus contains some complexity classes related to computation over the real field and captures the complexity of some fundamental problems in numerical analysis. There are several important and seemingly related problems that are still not known to lie inside the counting hierarchy; this project will investigate whether they, too, belong to it.
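For reference, the counting hierarchy is obtained by iterating the probabilistic class PP as an oracle:
\[
  \mathrm{CH} \;=\; \bigcup_{k\ge 0} \mathrm{C}_k\mathrm{P},
  \qquad
  \mathrm{C}_0\mathrm{P} = \mathrm{P},
  \qquad
  \mathrm{C}_{k+1}\mathrm{P} = \mathrm{PP}^{\mathrm{C}_k\mathrm{P}},
\]
so that
\[
  \mathrm{P} \;\subseteq\; \mathrm{PP} \;\subseteq\; \mathrm{PP}^{\mathrm{PP}} \;\subseteq\; \cdots \;\subseteq\; \mathrm{CH} \;\subseteq\; \mathrm{PSPACE}.
\]
A representative example of the numerical problems alluded to above is deciding whether an integer represented by an arithmetic circuit (a ``straight-line program'') is positive; problems of this kind are known to lie within the first few levels of this hierarchy.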
Programmers and circuit designers have demonstrated great ingenuity in finding clever and efficient solutions to many computational problems, but some problems seem extremely resistant to solution. How can one prove that (some of) these problems cannot be solved efficiently, no matter how clever one is? This is the task of computational complexity theory, and there is a growing consensus among mathematicians that this task is one of the most difficult challenges in all of mathematics.

The research supported by this grant found one novel strategy for proving that certain problems require very large circuits: for a number of well-studied and important computational problems, it was shown that complexity bounds can be "amplified", so that very large bounds can be obtained if one is first able to show that the problem requires circuits of size just slightly larger than the input size. [Later in this report, we shall refer to such a bound as a "pathetic" lower bound.] That is, if one can show that these problems are not entirely trivial, then one can conclude that they are indeed extremely complex. This might be seen as a positive development (since it provides a new avenue for showing that certain problems are complex), but it might also be a discouraging sign that even pathetic complexity bounds will be very difficult to obtain.

Subsequently, this investigation built on the work just described to show that if a "pathetic" lower bound on the complexity of one of these problems could be proven (showing that the problem is not completely trivial), then this would yield fairly efficient deterministic simulations of a class of probabilistic algorithms. "Derandomization" results of this form had been known previously, but only based on the existence of very strong lower bounds on the complexity of certain problems.

Derandomization techniques also turned out to be quite useful in forging new links between computational complexity theory and the field of algorithmic information theory (also known as "Kolmogorov complexity"). Kolmogorov complexity provides a mathematically rigorous notion of what it means for a bit string to be "random" (the central definitions are sketched below). In a series of papers supported by this grant, theorems were proven showing that the computational power provided by access to the (undecidable) set of "random" strings corresponds quite closely to some important and well-studied complexity classes. In particular, these theorems led to the formulation of the conjecture that BPP (the class of problems that can be solved efficiently by probabilistic algorithms) can be characterized in terms of access to the set of random strings. Such a characterization would provide a surprising link between the fields of computational complexity theory and computability theory.

Other investigations supported by the grant yielded (a) a better understanding of constraint satisfaction problems, (b) better bounds on the complexity of various problems in numerical analysis (related to the class known as "polynomial time over the reals"), and (c) a demonstration of the limitations of a model of arithmetic computation known as a bounded-width algebraic branching program.

The grant also supported the writing of several survey articles, to help educate a broader community about recent progress in the field. The principal investigator gave several invited plenary lectures at conferences, and also presented talks on complexity theory for a more general audience.
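For reference, the Kolmogorov-complexity notions used above can be stated as follows; $U$ denotes a fixed universal Turing machine, and the final display records only the shape of the conjectured characterization of BPP, not a precise theorem.
\[
  K_U(x) \;=\; \min\{\, |p| : U(p) = x \,\},
  \qquad
  R_{K_U} \;=\; \{\, x : K_U(x) \ge |x| \,\},
\]
so $R_{K_U}$ is the (undecidable) set of strings that cannot be compressed by $U$. The theorems described above relate complexity classes to the problems that are efficiently reducible to $R_{K_U}$, and the conjecture concerning BPP has the shape
\[
  \mathrm{BPP} \;\stackrel{?}{=}\; \{\, A : A \le_{tt}^{p} R_{K_U} \text{ for every universal machine } U \,\},
\]
up to technical conditions (such as restricting attention to decidable sets $A$, with $\le_{tt}^{p}$ denoting polynomial-time truth-table reducibility).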
Two doctoral students were supported by the grant; one has defended his thesis and the other is nearing completion of his degree.