The two most fundamental notions in effective randomness are Kolmogorov complexity and Martin-Löf randomness. The first measures the information content of finite binary strings, while the second captures the intuition that a random infinite binary sequence has no "distinguishing features". Both notions are rooted in computability theory, so it is natural that the study of randomness draws methods and ideas from computability theory, as well as classical mathematical subjects like measure theory and information theory. On the other hand, ideas from effective randomness have fed back into computability theory and even reverse mathematics, so there is cross-fertilization between effective randomness and other fields. Moreover, the theory of effective randomness has proved to be a rich field of study in its own right. Miller proposes to study the interaction between effective dimension and computational power, the interaction between degrees of randomness and computational power, triviality and lowness notions, and the strength of non-monotonic betting strategies.
A sequence of zeros and ones is considered random if the shortest (binary) computer program that generates the sequence has essentially the same length as the sequence itself. Intuitively, a random sequence has no patterns that can be exploited to give a compressed description. For example, the sequence consisting of a million zeros can be generated easily by a short program, so it is not random. On the other hand, there is a high probability that a sequence generated by flipping a coin a million times (tails is zero, heads is one) will be random. The length of the shortest program for a sequence is called the Kolmogorov complexity of the sequence; this notion was introduced in the 1960s. An infinite sequence is considered random if all of its finite initial segments are sufficiently random. This is equivalent to saying that no (semi-)computable betting strategy can win money trying to predict the digits of the sequence; both notions are made precise below. The proposed research will address fundamental questions about randomness. Some are speculative, such as: "What does it mean to say that one sequence is more random than another?" This question has led to difficult technical problems. It has motivated much of Miller's work on the initial segment complexity of Martin-Löf randoms and has recently led to the formulation of a notion that seems to come closer to providing a satisfactory answer. On the other hand, technical work often reveals higher level patterns. For example, there is a growing body of evidence supporting the assertion: "More random sequences are less useful as oracles and computationally useless random sequences are automatically more random." This ties back into the question about what it means to be "more random". Other basic questions, such as "To what extent can we distill information out of a semi-random source?" have led to interesting work but have not been exhausted on a technical level. Still others, like "How powerful are betting strategies that are allowed to bet on the digits of a sequence in any order?" remain largely open, despite concentrated effort. These are all fundamental questions that have proved to have interesting technical aspects.
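For reference, the standard formal definitions behind this informal description are the following; the notation is the usual one in algorithmic randomness and is introduced here only for illustration, not taken from this summary. Write $K(\sigma)$ for the prefix-free Kolmogorov complexity of a finite binary string $\sigma$, that is,
\[
  K(\sigma) = \min\{\, |p| : U(p) = \sigma \,\},
\]
where $U$ is a fixed universal prefix-free machine, and write $X \upharpoonright n$ for the first $n$ bits of an infinite sequence $X$. The Levin--Schnorr theorem characterizes Martin-L\"of randomness by initial segment complexity:
\[
  X \text{ is Martin-L\"of random} \iff (\exists c)(\forall n)\; K(X \upharpoonright n) \geq n - c.
\]
Equivalently, no left-c.e.\ martingale $d$, i.e., a semi-computable betting strategy satisfying the fairness condition $d(\sigma) = \tfrac{1}{2}\bigl(d(\sigma 0) + d(\sigma 1)\bigr)$, succeeds on $X$ in the sense that $\limsup_n d(X \upharpoonright n) = \infty$.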
This project involved work on many aspects of computability theory, a branch of mathematical logic, but the main focus was on the study of random sequences. A finite string of zeros and ones is considered random if the shortest (binary) computer program that generates the string has essentially the same length as the string itself. Intuitively, a random string has no patterns that can be exploited to give a compressed description. For example, the string consisting of a million zeros can be generated easily by a short program, so it is not random. On the other hand, there is a high probability that a string generated by flipping a coin a million times (tails is zero, heads is one) will be random. The length of the shortest program for a string is called the Kolmogorov complexity of the string; this notion was introduced in the 1960s. An infinite sequence of zeros and ones is considered random if all of its finite initial segments are sufficiently random. One special focus of this project was the interaction between the random sequences and the K-trivial sequences. The K-trivial sequences are as far from random as possible in the sense that their initial segments are maximally compressible (made precise below). The class of K-trivial sequences has many other natural characterizations, and these sequences have been central to the development of effective randomness. Working with others, I have answered key questions about K-triviality. For example, we showed that K-trivial sequences can be characterized as those sequences that do not help random sequences compute the halting problem, answering a question of Kučera. One of the necessary developments that allowed us to resolve these questions was an effective analysis of the Lebesgue density theorem, a direction of research that I helped establish near the beginning of the project period. The relationship between K-triviality and Lebesgue density was unexpected. It brought together two seemingly distinct aspects of the study of randomness: purely computability-theoretic questions about K-triviality and the study of the effective content of classical theorems from analysis and measure theory. My student, Mushfeq Khan, was also supported by this grant, including being funded for three semesters and three summers. This support helped him complete his Ph.D. He is now a postdoctoral fellow at the University of Hawaii.
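For reference, the standard definition behind "maximally compressible initial segments" is the following; again, the notation is the conventional one and is not taken verbatim from this report. A sequence $A$ is K-trivial if
\[
  (\exists c)(\forall n)\; K(A \upharpoonright n) \leq K(n) + c,
\]
that is, the first $n$ bits of $A$ carry no more information than the number $n$ itself, up to a constant. This is the opposite extreme from Martin-L\"of randomness, which requires $K(X \upharpoonright n) \geq n - c$ for every $n$.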