Human languages have complex systems for expressing meaning and grammatical information through morphology, the component of grammar that builds words from smaller elements (e.g., by rules of prefixing, suffixing, and copying). The project will develop a novel computational model that is both highly interpretable and capable of learning a broad range of morphological rules from examples. The model will be evaluated on its ability to induce morphological systems from large naturalistic data sets containing many rules and exceptions, and to match human performance in generalizing morphological rules from small amounts of evidence in controlled experiments. By incorporating insights from linguistics and other areas of cognitive science, the model will provide a bridge between the studies of human and artificial intelligence. By virtue of its modular and transparent design, the model will shed light on the uniquely human capacity to learn and extend linguistic patterns. The project will also provide interdisciplinary training opportunities in linguistics, cognitive psychology, artificial intelligence, and data science for students at many levels and from diverse backgrounds.

A large body of research in linguistics has identified the general properties of morphological systems and the restricted ways in which they vary across languages, while recent advances in artificial intelligence have given rise to computational models that can learn morphology with minimal supervision. These two approaches, however, have been developed largely in isolation from one another. The project will develop a novel kind of deep neural network that is interpretable, in the sense that its representations and operations are continuous versions of the discrete symbolic components of linguistic theory, and that can induce a broad range of morphological realization rules from positive evidence with domain-general learning algorithms (e.g., stochastic gradient descent). The model will be evaluated on large naturalistic data sets that are common in natural language processing and on results from new miniature artificial grammar experiments on infixation, intercalation, and reduplication. The project aims to demonstrate that deep neural networks can make transformative contributions to the study of language structure and acquisition when they have a modular design that mirrors the fundamental representations and operations of symbolic linguistic theory.
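As a toy illustration of this general idea (hypothetical, and far simpler than the proposed model), one can place continuous scores over a small set of discrete suffixation rules and fit them to example stem–inflection pairs by stochastic gradient descent, so that learning selects a symbolic rule from positive evidence alone:

```python
import math
import random

# Hypothetical toy: each "rule" is a discrete symbolic operation (add a suffix).
# The learner keeps continuous scores (logits) over the candidate rules and
# tunes them with stochastic gradient descent on positive examples only.
SUFFIXES = ["s", "ed", "ing", "en"]           # candidate realization rules
pairs = [("cat", "cats"), ("dog", "dogs"),    # positive evidence: plural = stem + "s"
         ("map", "maps"), ("pen", "pens")]

logits = [0.0] * len(SUFFIXES)                # continuous relaxation of a rule choice

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
for step in range(200):
    stem, inflected = random.choice(pairs)    # stochastic: one example per step
    target = SUFFIXES.index(inflected[len(stem):])
    probs = softmax(logits)
    # Gradient of the cross-entropy loss -log p[target] w.r.t. the logits
    # is (p - one_hot), so the correct rule's score rises and the rest fall.
    for i in range(len(logits)):
        grad = probs[i] - (1.0 if i == target else 0.0)
        logits[i] -= 0.5 * grad               # SGD update, learning rate 0.5

best = SUFFIXES[logits.index(max(logits))]
print(best)                                   # the learned rule: add "s"
```

Here only the choice among whole suffixes is relaxed; the project's model would instead relax a much richer space of operations (prefixation, infixation, intercalation, reduplication) in the same continuous-versions-of-symbols spirit.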

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

National Science Foundation (NSF)
Division of Behavioral and Cognitive Sciences (BCS)
Standard Grant
Program Officer: Tyler Kendall
Johns Hopkins University
United States