Computations rarely run in deterministic isolation. They interact with users and adversaries, with random events called for by algorithms or generated by the context, with delays and glitches from the system, hardware, distributed infrastructure, etc. Some of these interactions are hard to model, but even those with straightforward mathematical models are often very hard to analyze.

Randomness and non-determinism are two basic "freedoms" branching out of the concept of deterministic computation, and they play a crucial role in computing theory. Yet our understanding of their role and power is minimal. Even gradual progress in understanding these phenomena and their relationship to each other and to other concepts would be important.

One example of achievements in this direction is the discovery of a generic relationship between one-way functions and the deterministic generation of randomness. Another is the concept of transparent (also called holographic, or PCP) proofs and computations.

A number of interesting techniques useful for quite different results in these areas have been accumulated: low-degree polynomials and Fourier transforms over low-periodic groups (related to classical results on error-correcting codes and hashing), expander graphs, hierarchical structures, etc. The research is to continue the PI's investigation of such concepts and of the power of these and other related techniques.
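To make the first item on this list concrete, the following is a minimal sketch (purely illustrative, with the field size and message length chosen arbitrarily) of the low-degree-polynomial idea behind classical error-correcting codes: a message is read as the coefficients of a polynomial of degree below k, and redundancy comes from evaluating that polynomial at n > k points, since any k evaluations determine it.

```python
# Illustrative Reed-Solomon-style encoding over a small prime field GF(P).
# The parameters below are assumptions for the example only.

P = 257  # a small prime; arithmetic is done modulo P

def rs_encode(message, n):
    """Evaluate the degree-<k polynomial with coefficients `message` at x = 0..n-1 (mod P)."""
    k = len(message)
    assert k <= n <= P
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P for x in range(n)]

def rs_recover(points):
    """Given any k surviving (x, y) pairs, return a function computing the polynomial
    everywhere, via Lagrange interpolation over GF(P)."""
    def interpolate_at(x0):
        total = 0
        for j, (xj, yj) in enumerate(points):
            num, den = 1, 1
            for m, (xm, _) in enumerate(points):
                if m != j:
                    num = num * (x0 - xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P  # P prime, so den^(P-2) is its inverse
        return total
    return interpolate_at

if __name__ == "__main__":
    msg = [17, 42, 7, 99]                               # k = 4 message symbols
    code = rs_encode(msg, 8)                            # n = 8 evaluations: tolerates 4 erasures
    survivors = [(x, code[x]) for x in (1, 3, 4, 6)]    # any 4 surviving positions suffice
    decode = rs_recover(survivors)
    assert all(decode(x) == code[x] for x in range(8))  # full codeword recovered from the survivors
    print("recovered codeword:", [decode(x) for x in range(8)])
```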

Symmetry is one of the central phenomena in many fields. In computations it can simplify analysis and provide uniformity and redundancy useful, e.g., for error-correction. On the other hand, it can cause indecisiveness and deadlocks, and complicate the initialization and organization of computing processes. Breaking symmetries is as essential a task as maintaining them. A study of a number of mathematical and algorithmic tools useful for symmetry breaking is planned. Examples include Thue sequences, aperiodic tilings, extensions of the concept of flat connections from manifolds to graphs, and others.
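As an illustration of the first of these tools (the construction and parameters here are classical and chosen only for the example), the following sketch generates a square-free "Thue" word over three symbols by counting the 1s between consecutive 0s of the Thue-Morse sequence; repetition-free labelings of this kind are a standard way to avoid the periodic patterns that symmetry breaking must eliminate.

```python
# Illustrative sketch: a ternary square-free (Thue) sequence derived from Thue-Morse.

def thue_morse(n_bits):
    """Prefix of the Thue-Morse sequence: t(n) = parity of the number of 1s in binary n."""
    return [bin(n).count("1") % 2 for n in range(n_bits)]

def ternary_thue(length, n_bits=1 << 14):
    """Thue's classical construction: the counts of 1s between consecutive 0s of the
    Thue-Morse sequence form a square-free word over {0, 1, 2}."""
    t = thue_morse(n_bits)
    zeros = [i for i, b in enumerate(t) if b == 0]
    return [zeros[k + 1] - zeros[k] - 1 for k in range(length)]

def has_square(word):
    """Brute-force check for a block repeated twice in a row (a 'square')."""
    n = len(word)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if word[i:i + half] == word[i + half:i + 2 * half]:
                return True
    return False

if __name__ == "__main__":
    w = ternary_thue(200)
    print("".join(map(str, w))[:60], "...")
    print("contains a square:", has_square(w))   # expected: False
```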

In prior work, the PI has made major contributions to our understanding of randomness, nondeterminism, complexity, and symmetry breaking. This project will advance our understanding of these areas.

Project Report

Today's computing environment is a soup of more or less deterministic pieces blended with a multitude of hard-to-model unpredictables, all sorts of agents, mismatched interests, etc., and bears little resemblance to Turing Machines. The Founders' notion of computation has lost some of its relevance in this ocean. What information can Wikipedia generate that Turing Machines cannot? What can it not? I consider a much more general concept of which computational tasks are solvable and which are not, and give examples where these concepts can be applied while the classical ones cannot. Given space limitations, here is just one example (developed with my student Samuel Epstein).

Many intellectual and computing tasks require guessing the hidden part of the environment from available observations. In different fields these tasks have various names, such as Inductive Inference, Extrapolation, Passive Learning, etc. The relevant part of the environment (including all data that interest the guesser or might influence the guess) can be represented as an, often huge, string x. The known observations restrict it to a (typically enormous) set D. One popular approach to guessing, the "Occam Razor", tells us to focus on the simplest members of D. (In words attributed to A. Einstein, "A conjecture should be made as simple as it can be, but no simpler.") Yet the justification of this principle seems rather mysterious.

A more revealing philosophy is based on the idea of a "Prior". It assumes the guessing of x in D is done by restricting to D an a priori probability distribution on all strings. The less we know about x (before the observations restricting x to D), the more "spread" is the prior, i.e., the smaller is the variety of sets that can be ignored due to their negligible probability. This means that distributions truly prior to any knowledge would be the largest up to constant factors. Among enumerable distributions (i.e., those generated by randomized algorithms on their outputs), such a largest prior does in fact exist and decreases roughly exponentially with the complexity of x. These ideas, developed by Solomonoff and many subsequent papers, do remove some of the mystery from the Occam Razor principle.

Yet they immediately yield a reservation: the simplest objects EACH have the highest universal probability, but it may still be negligible compared to the COMBINED probability of the complicated objects in D. This suggests that the general inference situation may be much more obscure than the widely believed Occam Razor principle describes. In a rather surprising development that greatly simplifies the general situation, we showed this cannot happen, except as a purely mathematical construction: any such D has high information I(D:H) about the Halting Problem H. So there is no way to generate such a D, per an informational version of the Church-Turing Thesis that I mentioned in the first paragraph and discuss in another paper.
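Schematically, the objects just described can be summarized as follows; additive constants and logarithmic terms are suppressed here, and the exact statement is in the paper with Epstein.

```latex
% Schematic summary; additive constants and O(log)-terms are suppressed.
\[
  \mathbf{m}(x) \;=\; \sum_{p\,:\,U(p)=x} 2^{-|p|}
  \qquad\text{(universal prior; $U$ a universal prefix algorithm)}
\]
\[
  -\log \mathbf{m}(x) \;=\; K(x) + O(1)
  \qquad\text{(the prior decreases roughly exponentially with the complexity $K$)}
\]
\[
  \mathbf{m}(D) \;=\; \sum_{x\in D} \mathbf{m}(x),
  \qquad
  I(D\!:\!H) \;=\; K(D) - K(D \mid H)
  \qquad\text{($H$ = the Halting Problem).}
\]
\[
  \max_{x\in D} \mathbf{m}(x) \;\gtrsim\; 2^{-I(D:H)}\,\mathbf{m}(D)
  \qquad\text{(the result, stated roughly; $\gtrsim$ hides the suppressed factors)}
\]
```

That is, a set D in which every individual member has negligible prior probability compared to m(D) must carry large information about the Halting Problem, and so cannot arise from any realizable process.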

Budget Start: 2010-09-01
Budget End: 2012-08-31
Fiscal Year: 2010
Total Cost: $99,999
Name: Boston University
City: Boston
State: MA
Country: United States
Zip Code: 02215