Slaman proposes to investigate the effective, and more generally definable, aspects of mathematical phenomena such as genericity, compactness, and randomness. One central question in this investigation is to give necessary and sufficient conditions on an infinite binary sequence X which ensure that there is a continuous measure m such that X is effectively or arithmetically random relative to m. This is a classic mathematical problem: given an individual data set, determine a distribution which would generate it. By results of Reimann and Slaman, for all but countably many X there is such an m. The argument is highly meta-mathematical, and necessarily so, as Reimann and Slaman have also shown that this co-countability theorem cannot be proven without invoking infinitely many iterations of the power set of the reals. In the emerging picture, there is a close interaction between a sequence's failure to have a random ingredient and its being structurally definable, which should be studied more deeply.
Slaman's proposal can be viewed in the context of the continuing investigation of computability and mathematical definability. With quantitative mathematical analysis of these phenomena, one can answer questions of the form ``Is there an algorithm to solve all problems of this type?'', ``Is there a simple example with specific properties?'', ``Is there a concrete classification of all structures with these properties?''. One can also address questions of the sort ``Are these techniques adequate to resolve this question?'' or ``How random must a sequence be in order to exhibit a particular typical behavior?'' To show that there is no algorithm of a certain type, one must develop a detailed theory of computation. Similarly, one must develop a detailed theory of definability to show that there is no simple example with certain properties, or to show that certain phenomena do not admit concrete classifications.
Slaman works in the general area of Mathematical Logic. He studies aspects of mathematical objects and methods which are tied to intrinsic complexity, such as a set's being non-computable or a theorem's being unprovable by certain means. Naturally, investigations from this perspective yield understanding of mathematical structures and of the contexts necessary to analyze them. They also suggest approaches, and even solutions, to questions which are not foundational. We will give an example of each type. First, we consider how infinitary methods can be applied to finite sets. Infinitary combinatorics, especially Ramsey Theory, provides a rich set of examples. Surprisingly, there are relatively few instances for which it is known that these theorems do not have more elementary proofs, proofs which involve only finite sets and their elementary properties, possibly analyzed with great ingenuity and sophistication. Ramsey's Theorem for Pairs (RT22), a generalization of the pigeonhole principle, states that for any partition F of the pairs of natural numbers into two parts there is an infinite set A such that all of the pairs taken from A belong to the same part of the partition. Such a set A is said to be homogeneous for F. RT22 has consequences for finite sets: for every finite size k there is an M such that for any partition F of the pairs of natural numbers less than M into two parts there is a size-k set A all of whose pairs belong to the same part of the partition. The infinite version of Ramsey's Theorem has an essentially non-computable aspect: there is a computable F as above for which no infinite computable set is homogeneous. Slaman, in collaboration with Chong Chi Tat and Yang Yue, showed that the set of finitary consequences of RT22 has interesting meta-mathematical features, too.
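The finitary consequence of RT22 stated above can be checked directly for small parameters. The brute-force sketch below (our own illustration; the function names are hypothetical) verifies that for k = 3 the bound M = 6 suffices while M = 5 does not: every 2-partition of the pairs from a 6-element set has a monochromatic triple, but a 5-element set admits a partition with none (the familiar 5-cycle coloring).

```python
from itertools import combinations, product

def has_homogeneous_triple(coloring, vertices):
    """Return True if some 3-element set has all its pairs in one part.

    coloring maps each unordered pair (a, b) with a < b to 0 or 1.
    """
    for a, b, c in combinations(vertices, 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def finite_ramsey_holds(M, ):
    """Check the finite Ramsey statement for k = 3 and bound M:
    every 2-partition of the pairs from {0, ..., M-1} has a
    size-3 homogeneous set."""
    vertices = range(M)
    pairs = list(combinations(vertices, 2))
    # Exhaustively try all 2^(number of pairs) partitions.
    for colors in product((0, 1), repeat=len(pairs)):
        coloring = dict(zip(pairs, colors))
        if not has_homogeneous_triple(coloring, vertices):
            return False
    return True

print(finite_ramsey_holds(6))  # True:  M = 6 works for k = 3
print(finite_ramsey_holds(5))  # False: M = 5 does not
```

The search is feasible here only because the parameters are tiny; the point of the meta-mathematical analysis is precisely that such bounds grow beyond the reach of naive methods.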
Namely, by earlier results of Cholak, Jockusch and Slaman, this set includes everything provable from a known principle, Sigma_2-bounding, while by Chong, Slaman and Yang it is strictly weaker than Sigma_2-induction, the standard immediate strengthening of Sigma_2-bounding. Chong, Slaman and Yang exhibited previously unexplored families of non-standard models of arithmetic which are amenable to making such delicate comparisons. Slaman also collaborated with Ian Haken to apply these families to the analogous question about the finitary consequences of the existence of infinite random sequences.

Second, Slaman, in collaboration with Veronica Becher and, in part, with Pablo Heiber and Yann Bugeaud, applied the methodology of Mathematical Logic to classical issues in Diophantine Approximation. A real number is said to be simply normal in base b if the digits in its base-b expansion are distributed uniformly. For example, a real number is simply normal in base 10 if each digit in its usual decimal expansion occurs one tenth of the time in the limit. A real is normal in base b if it is simply normal in every integer power of b, and it is absolutely normal if it is simply normal in every integer base. Normality is one of the trademark characteristics of a random sequence. Borel, who introduced the property of normality in 1909, showed that the set of normal reals has measure one. In the 1930s, Turing gave an algorithm to compute the digits of an absolutely normal number, but noted that it involved an exponential search which made it impractical. Becher, Heiber and Slaman gave an eminently practical algorithm, running in essentially quadratic time, to compute an absolutely normal number. It is a highly non-trivial fact that the only implication between normality in one base and normality in another is the following: for b and c multiplicatively dependent, i.e. both integer powers of the same natural number, a real is normal to base b if and only if it is normal to base c.
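Simple normality in base 10 can be illustrated empirically. The sketch below (our own illustration, not the Becher-Heiber-Slaman algorithm) tallies digit frequencies in an initial segment of Champernowne's constant 0.123456789101112..., which is known to be normal in base 10. Convergence is slow and oscillates with the leading digits, so we stop at a block boundary, after concatenating all positive integers below 10^5.

```python
# Digit frequencies in an initial segment of Champernowne's constant
# 0.123456789101112..., formed by concatenating the positive integers.
digits = "".join(str(i) for i in range(1, 100_000))

counts = [0] * 10
for d in digits:
    counts[int(d)] += 1

n = len(digits)  # 488889 digits in this block
freqs = [c / n for c in counts]

for digit, f in enumerate(freqs):
    print(digit, round(f, 4))
# Each frequency is close to 1/10; the digit 0 lags slightly
# because expansions carry no leading zeros.
```

A proof of normality, of course, requires controlling these frequencies in every sufficiently long initial segment, not just at convenient block boundaries.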
The first analysis of the multiplicatively independent case was given by Cassels, who showed that a random element of the Cantor middle-third set is normal to every base b which is multiplicatively independent from 3. Schmidt then showed that if S and R are subsets of N such that every element of S is multiplicatively independent from every element of R, then there is a real which is normal to every base in S and to no base in R. Becher, Bugeaud and Slaman adapted the methodology of the quadratic-time algorithm for absolute normality to analyze the dependencies among simple normality in different bases. This led to a complete understanding: a necessary and sufficient condition on a set M of natural numbers for there to be a real number x which is simply normal to base b if and only if b is an element of M.
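Multiplicative dependence itself is easy to decide. A minimal sketch (the function names are ours): two integers b, c >= 2 are multiplicatively dependent exactly when both are integer powers of a common natural number, which for integers at least 2 is equivalent to their sharing the same smallest base of which each is a power.

```python
def primitive_base(n):
    """Smallest integer d >= 2 with n = d**k for some k >= 1."""
    assert n >= 2
    best = n  # n = n**1 always works
    k = 2
    while 2 ** k <= n:
        d = round(n ** (1.0 / k))
        # Try nearby candidates to guard against floating-point rounding.
        for cand in (d - 1, d, d + 1):
            if cand >= 2 and cand ** k == n:
                best = min(best, cand)
        k += 1
    return best

def multiplicatively_dependent(b, c):
    """True iff b and c are both integer powers of the same natural number."""
    return primitive_base(b) == primitive_base(c)

print(multiplicatively_dependent(4, 8))    # True:  4 = 2^2, 8 = 2^3
print(multiplicatively_dependent(10, 100)) # True:  both powers of 10
print(multiplicatively_dependent(2, 3))    # False
```

So, for example, normality to base 4 and normality to base 8 coincide, while the results of Cassels and Schmidt concern pairs such as 2 and 3, where no such reduction is available.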