This is a research project on artificial intelligence (AI) that focuses on the problem of explainability. More specifically, it relates deep learning neural networks to broader explanatory frameworks in cognitive science. The PI plans to develop a philosophically and empirically grounded account of the rational faculties that can be modeled by diverse deep learning neural network architectures. A main goal of this research is to articulate a more nuanced view of deep learning and the diversity of its architectures and uses, made vivid and accessible to academic scholars and the broader public through frequent comparisons to research on human and animal cognition. Scholars across disciplines will benefit from the account of the rational capacities of deep neural networks developed in this research. The PI will present findings at conferences and workshops in computational neuroscience, psychology, animal cognition, and philosophy; the results are to be published in a monograph suitable for teaching and research in any of these disciplines. The work will also be presented at conferences for policymakers working on emerging technologies, with the aim of helping audiences peek inside the "black box" of deep learning systems and better anticipate the space of possible policy interventions and their effects. It will also be presented in public fora, such as the Brains Blog, the New York Times' The Stone column, 3:AM Magazine, and philosophy podcasts such as Philosophy Bites.

This research project is robustly interdisciplinary; the PI will bring together cutting-edge research from computer science, psychology, neuroscience, and philosophy to provide a better understanding of deep learning neural networks. He will begin by analyzing a basic deep learning network template, describe the computational problems that plague this template when it is applied to biologically relevant problems, and then explore how those limitations can be overcome by adding further biologically inspired components. These modifications, all of which have already demonstrated their promise in successful models, correspond to biological faculties such as reinforcement learning, predictive learning, imagination, attention, episodic memory, executive control, and social cognition. For each component, comparisons will be drawn to the structure and function of the corresponding faculty in animals and humans, to the degree of similarity between the model component's structure and its implementation in biological brains, and to the kind or degree of rationality that an architecture deploying those components can exhibit in its decision-making. This strategy allows diverse deep learning model architectures (many of which have received little discussion) to be explored under a common narrative thread. The results of this project will enable scientists, engineers, and the broader public to understand both the strengths and the limits of different deep architectures, and how their capacities relate to those of biological organisms. Meeting this objective is imperative as deep learning continues to be deployed for an increasingly wide range of tasks, including image search, facial recognition, driverless automobile navigation, game-playing, medical diagnosis, and many others.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Type: Standard Grant (Standard)
Application #: 2020585
Program Officer: Frederick Kronz
Budget Start: 2020-09-01
Budget End: 2021-08-31
Fiscal Year: 2020
Total Cost: $139,360
Name: University of Houston
City: Houston
State: TX
Country: United States
Zip Code: 77204