Recently, there has been a good deal of shared interest among researchers in machine learning (whether algorithmic, neural-network, or genetic-algorithm based), in models of biological learning, and in statistics, all of which make inductive estimates of data dependencies. The different disciplines, however, still have their own terminologies and approaches. A common terminology and framework for these disciplines can begin to be provided by investigating inductive principles, which give general prescriptions for what to do with training data in order to learn a model. There are only a handful of known inductive principles (Regularization, Structural Risk Minimization, Bayesian Inference, Minimum Description Length), but there are many learning methods, i.e., constructive implementations based on these principles. This research project will develop an understanding of the differences among inductive principles, how they vary in power, and their use in model selection and in the development of learning algorithms, in order to provide more solid foundations for the fields that use them and a better appreciation of the commonalities among those fields.
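As a concrete illustration of one inductive principle named above, regularization prescribes minimizing training error plus a penalty on model complexity rather than training error alone. A minimal sketch, assuming ridge regression as the textbook instance (the specific data and penalty value are illustrative, not from the award):

```python
import numpy as np

# Ridge regression: w* = argmin_w ||Xw - y||^2 + lam * ||w||^2,
# with closed form w* = (X^T X + lam I)^{-1} X^T y.
def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights for penalty strength lam."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Illustrative synthetic data (not from the project itself).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=20)

w_unreg = ridge_fit(X, y, lam=0.0)  # ordinary least squares (no penalty)
w_reg = ridge_fit(X, y, lam=1.0)    # regularized fit

# The complexity penalty shrinks the weight vector toward zero.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))  # True
```

Varying `lam` here is exactly the model-selection question the abstract refers to: each value of the penalty picks out a different trade-off between fitting the training data and limiting model complexity.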

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 9618167
Program Officer: Larry H. Reeker
Budget Start: 1997-02-15
Budget End: 1998-01-31
Fiscal Year: 1996
Total Cost: $50,000
Name: University of Minnesota Twin Cities
City: Minneapolis
State: MN
Country: United States
Zip Code: 55455