The primary focus of this research is the role of randomness in computation. Randomization has proved extremely useful in almost all areas of computer science, including Monte Carlo simulation, cryptography, distributed computing, and network construction. These uses of randomness seem wonderful until one realizes that computers do not have truly random bits available to them. Most computers obtain their random bits from pseudo-random generators. Usually these generators work well in practice; nevertheless, it is not at all unusual to find instances where they fail. It is therefore important to find generators that are provably good, if such generators exist. Can we always remove the randomness from an efficient randomized algorithm and be left with an efficient deterministic algorithm that produces the correct answer? This is the general problem, and it has different theoretical formulations depending on the notion of efficiency used. Since polynomial time is the most common measure of efficiency, the most common formulation is: does randomized polynomial time (BPP) equal deterministic polynomial time (P)? Unfortunately, the only direct progress on the BPP vs. P question has relied on unproven assumptions. This work therefore focuses on two more specific questions. First, can BPP be simulated using a general weak random source that outputs n bits with bounded entropy? Second, does RSPACE(S) = DSPACE(S), or, more realistically within five years, can randomized SPACE(S) algorithms that use more than poly(S) random bits be simulated deterministically in SPACE(S)?
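For intuition about the first question, the classical von Neumann trick illustrates how unbiased bits can be distilled from a defective source, under an assumption much stronger than the general weak-source model above: the source must emit independent bits with a fixed (possibly unknown) bias. The Python sketch below is purely illustrative and is not part of the proposed techniques; the function names, parameters, and the simulated biased source are assumptions made for the example.

    import random

    def biased_bits(p, n, seed=0):
        """Restricted model of a defective source: n independent
        bits, each equal to 1 with fixed (unknown) bias p."""
        rng = random.Random(seed)
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def von_neumann_extract(bits):
        """Von Neumann's trick: read the input in pairs; output 0
        for the pair (0,1), 1 for (1,0), and discard (0,0) and
        (1,1). Since Pr[(0,1)] = Pr[(1,0)] = p(1-p) for independent
        bits of bias p, the surviving bits are unbiased."""
        out = []
        for a, b in zip(bits[0::2], bits[1::2]):
            if a != b:
                out.append(a)
        return out

    if __name__ == "__main__":
        raw = biased_bits(p=0.8, n=100_000)
        clean = von_neumann_extract(raw)
        print(f"raw fraction of ones:       {sum(raw) / len(raw):.3f}")     # ~0.800
        print(f"extracted fraction of ones: {sum(clean) / len(clean):.3f}") # ~0.500

Note that this trick breaks down for general bounded-entropy sources, whose bits may be correlated in adversarial ways; that gap is precisely what makes the first question above nontrivial.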

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 9457799
Program Officer: Yechezkel Zalcstein
Budget Start: 1994-09-15
Budget End: 2000-08-31
Fiscal Year: 1994
Total Cost: $272,500
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78712