Laskowski is continuing his research in model theory, which is a branch of mathematical logic. The PI's research is in three areas. It is well known that almost all of the basic notions of first-order model theory are absolute, i.e., their interpretation does not change depending on the model of set theory one is working in. The situation is known to be much more complicated for similar concepts in infinitary languages, where even simple concepts such as categoricity in power can fail to be absolute, even for cardinal-preserving forcings. A few years ago, it was noted that if one places extremely strong stability-theoretic conditions on a theory, then these conditions limit the quantifier complexity of the elementary diagram of any model of the theory. These bounds immediately give upper bounds on the computational complexity of definable subsets of models of such a theory. The model-theoretic notion of definability of types over finite sets is intimately related to the notion of a compression scheme in computational learning theory. This connection has been fruitful in both directions. A number of examples of concept classes that possess compression schemes have been identified, and conversely this connection has led to a deeper model-theoretic understanding of dependent theories.
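The compression-scheme connection mentioned above can be illustrated with a textbook example that is not part of the proposal itself: the concept class of threshold functions on the real line admits a sample compression scheme of size one. The sketch below (all function names are illustrative, chosen here for exposition) compresses any consistently labeled sample down to a single point, from which a hypothesis that relabels the whole sample correctly can be reconstructed.

```python
# Illustrative sketch: a size-one sample compression scheme for the
# concept class of thresholds h_t(x) = 1 iff x >= t on the reals.
# Function names are hypothetical, for exposition only.

def compress(sample):
    """Keep only the smallest positively labeled point, if any.
    `sample` is a list of (x, label) pairs consistent with some threshold h_t."""
    positives = [x for x, y in sample if y == 1]
    return min(positives) if positives else None

def reconstruct(kept):
    """Rebuild a hypothesis from the compressed sample."""
    if kept is None:
        return lambda x: 0          # no positive example seen: label everything 0
    return lambda x: int(x >= kept)

# The reconstructed hypothesis agrees with every point of the original sample:
# each positive lies at or above the kept point, each negative strictly below it.
sample = [(0.2, 0), (1.5, 1), (0.9, 0), (3.0, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```

The point of the example is the flavor of the correspondence: the single retained point plays the role of a definition of the type over a finite set, and the reconstruction function recovers the concept from that small amount of data.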
Model theory is concerned with the interplay between theories, i.e., sets of sentences in a formal language, and the classes of algebraic structures (models) that satisfy these sentences. There is a well-established taxonomy of theories, based on the embeddability or non-embeddability of certain configurations of elements into models of the theory. Laskowski has identified certain classes of theories that have direct connections to limitations on data compression and computational learning theory, and he will continue his examination of these theories.