Neuroscience is producing increasingly complex data sets, including measures and manipulations of subcellular, cellular, and multi-cellular mechanisms operating over multiple timescales and in the context of different behaviors and task conditions. These data sets pose several fundamental challenges. First, for a given data set, what are the relevant spatial, temporal, and computational scales at which the underlying information-processing dynamics are best understood? Second, what are the best ways to design and select models to account for these dynamics, given the inevitably limited, noisy, and uneven spatial and temporal sampling used to collect the data? Third, what can increasingly complex data sets, collected under increasingly complex conditions, tell us about how the brain itself processes complex information? The goal of this project is to develop and disseminate new, theoretically grounded methods to help researchers overcome these challenges. Our primary hypothesis is that resolving, modeling, and interpreting relevant information-processing dynamics from complex data sets depends critically on approaches that are built upon understanding the notion of complexity itself. A key insight driving this proposal is that definitions of complexity that come from different fields, often with different interpretations, in fact share a common mathematical foundation. This common foundation implies that different approaches, from direct analyses of empirical data to model fitting, can extract statistical features related to computational complexity that can be compared directly to each other and interpreted in the context of ideal-observer benchmarks.
Starting with this idea, we will pursue three specific aims: 1) establish a common theoretical foundation for analyzing both data and model complexity; 2) develop practical, complexity-based tools for data analysis and model selection; and 3) establish the usefulness of complexity-based metrics for understanding how the brain processes complex information. Together, these aims provide new theoretical and practical tools for understanding how the brain integrates information across large temporal and spatial scales, using formal, universal definitions of complexity to facilitate the analysis and interpretation of complex neural and behavioral data sets.

Public Health Relevance

The proposed work will establish new, theoretically grounded computational tools to help neuroscience researchers design and analyze studies of brain function. These tools, which will be made widely available to the neuroscience research community, will support a broad range of studies of the brain, enhance scientific discovery, and promote rigor and reproducibility.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Biomedical Imaging and Bioengineering (NIBIB)
Type
Research Project (R01)
Project #
5R01EB026945-02
Application #
9789280
Study Section
Special Emphasis Panel (ZEB1)
Program Officer
Peng, Grace
Project Start
2018-09-20
Project End
2021-06-30
Budget Start
2019-07-01
Budget End
2020-06-30
Support Year
2
Fiscal Year
2019
Total Cost
Indirect Cost
Name
University of Pennsylvania
Department
Neurosciences
Type
Schools of Medicine
DUNS #
042250712
City
Philadelphia
State
PA
Country
United States
Zip Code
19104