A central mystery in the philosophy of science is "the unreasonable effectiveness of mathematics." Why do natural phenomena seem to have simple, elegant mathematical models? The success of machine learning raises an analogous issue, sometimes called "the unreasonable effectiveness of machine learning." Is the success of both mathematical science and machine learning due to some common element in the phenomena studied? Is it due to the nature of the questions we ask? Or is it a mirage arising from cognitive bias, from focusing on successful uses while ignoring failures? The aim of this proposal is to better understand this phenomenon by relating it to a similar one in mathematics and theoretical computer science, where complex mathematical objects often have simple "models" that give accurate estimates of many quantities of interest. By relating concepts in pure mathematics to their analogs in machine learning, the researchers plan both to use powerful machine-learning techniques to prove new mathematical results and to use mathematical concepts to extend the scope of machine learning.

More precisely, the research team will explore a deep connection between regularity lemmas in mathematics, hardcore theorems in complexity theory, and boosting in machine learning. At the core of this connection is the following problem: given a distribution D on some set of data points and a class of hypotheses H, find a model M (namely, a distribution on the same underlying set) that (i) is indistinguishable from D with respect to the hypotheses in H; (ii) is simple, in that it is defined in terms of a small number of tests from H; and (iii) has as high entropy as possible subject to the conditions above. With this goal in mind, the proposal will address the following main questions. 1. When do such simple, high-entropy models exist? In other words, what does it say about natural phenomena that they have simple, accurate models, and what does it say about the questions being asked? This characterization will be studied both in terms of the structure of the distribution D and in terms of the class H. 2. What are the quantitative tradeoffs between the criteria above (indistinguishability, simplicity, and entropy)? 3. When such models exist, can they be found efficiently, and what algorithms would lead to their discovery?
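The boosting-style construction behind this problem can be illustrated with a toy sketch. The instance below is entirely hypothetical (the domain, target distribution, test class, and parameters `eps` and `eta` are all illustrative choices, not part of the proposal): starting from the maximum-entropy (uniform) distribution, the loop repeatedly finds a test in H that distinguishes D from the current model M and applies a multiplicative-weights update, so the final M is defined by a small number of tests, is eps-indistinguishable from D with respect to H, and retains high entropy.

```python
import itertools
import math

# Toy sketch (all names and parameters are illustrative, not from the proposal):
# build a simple, high-entropy model M that is eps-indistinguishable from a
# target distribution D with respect to a small test class H, by boosting /
# multiplicative weights.

n = 6                                     # domain: all 6-bit strings
domain = list(itertools.product([0, 1], repeat=n))

# Target distribution D: puts more mass on strings with many 1s (illustrative).
w = [2.0 ** sum(x) for x in domain]
total = sum(w)
D = {x: wi / total for x, wi in zip(domain, w)}

# Test class H: the n coordinate-projection tests h_i(x) = x_i.
H = [lambda x, i=i: x[i] for i in range(n)]

def expectation(P, h):
    """Expected value of test h under distribution P."""
    return sum(P[x] * h(x) for x in domain)

eps, eta = 0.05, 0.1                      # indistinguishability and step size
M = {x: 1.0 / len(domain) for x in domain}  # start at uniform (max entropy)
used = []                                 # tests appearing in M's definition

for _ in range(1000):
    # Find the test in H that best distinguishes D from the current model M.
    gap, h = max(((expectation(D, t) - expectation(M, t), t) for t in H),
                 key=lambda pair: abs(pair[0]))
    if abs(gap) <= eps:
        break                             # M is eps-indistinguishable from D
    used.append(h)
    # Multiplicative-weights update: tilt M toward agreeing with D on h.
    sign = 1.0 if gap > 0 else -1.0
    M = {x: p * math.exp(eta * sign * h(x)) for x, p in M.items()}
    Z = sum(M.values())
    M = {x: p / Z for x, p in M.items()}

max_gap = max(abs(expectation(D, t) - expectation(M, t)) for t in H)
entropy = -sum(p * math.log2(p) for p in M.values() if p > 0)
print(f"tests used: {len(used)}, max gap: {max_gap:.3f}, "
      f"entropy: {entropy:.2f} of {n} bits")
```

Because each update multiplies M by an exponential in one test, the final model is an exponential-family distribution defined by the few tests in `used`, making the simplicity of M explicit; the same scheme underlies the known boosting proofs of regularity and hardcore lemmas.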

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2019-07-01
Budget End: 2022-06-30
Support Year:
Fiscal Year: 2019
Total Cost: $499,999
Indirect Cost:
Name: University of California San Diego
Department:
Type:
DUNS #:
City: La Jolla
State: CA
Country: United States
Zip Code: 92093