While machine learning and artificial intelligence have greatly advanced in recent years, these systems still have significant limitations. Machine learning systems have distinct learning and deployment phases. When new information must be acquired, the entire system is often rebuilt rather than trained on only the new information, because otherwise the system would forget much of its past knowledge. These systems cannot learn autonomously and often require strong supervision. This project aims to address these issues by creating new multi-modal brain-inspired algorithms capable of learning immediately without excessive forgetting. These algorithms can enable learning with fewer computational resources, which can facilitate learning on devices such as cell phones and home robots. Fast learning from multimodal data streams is critical to enabling natural interactions with artificial agents. Autonomous multimodal learning will reduce reliance on annotated data, which is a major bottleneck in increasing the utility of artificial intelligence, and may enable significant gains in performance. This research will provide building blocks that others can use to create new algorithms, applications, and cognitive technologies.

The algorithms are based on the complementary learning systems theory of how the human brain learns quickly. The human brain uses its hippocampus to learn new information immediately, and this information is then transferred to the neocortex during sleep. Based on this theory, streaming learning algorithms for deep neural networks will be created, enabling fast learning from structured data streams without catastrophic forgetting of past knowledge. The algorithms will be assessed on their ability to classify large image databases containing thousands of categories. These systems will be leveraged to pioneer multimodal streaming learning for visual question answering and visual query detection, enabling language to inform understanding of visual scenes. These capabilities will be integrated to enable a model to autonomously query an environment with limited human supervision.
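The abstract does not specify the algorithms in detail; purely as an illustration of the general idea, the sketch below shows a generic rehearsal-based streaming learner in PyTorch, in which a small episodic memory buffer (playing a rough hippocampus-like role) is replayed alongside each newly arriving example so the network can learn immediately without being rebuilt from scratch. The class name StreamingLearner, the buffer_size and replay_k parameters, and the linear model are hypothetical choices made for this sketch, not the project's actual design.

    # Minimal sketch (not the project's method): rehearsal-based streaming learning.
    # Each new example is learned immediately and mixed with a few samples replayed
    # from a bounded memory buffer to limit catastrophic forgetting.
    import random
    import torch
    import torch.nn as nn

    class StreamingLearner:
        def __init__(self, n_features, n_classes, buffer_size=500, replay_k=8):
            self.model = nn.Linear(n_features, n_classes)  # stand-in for a deep network
            self.opt = torch.optim.SGD(self.model.parameters(), lr=0.01)
            self.loss_fn = nn.CrossEntropyLoss()
            self.buffer = []          # hippocampus-like episodic memory of (x, y) pairs
            self.buffer_size = buffer_size
            self.replay_k = replay_k
            self.seen = 0

        def fit_one(self, x, y):
            """Learn a single labeled example (x: 1-D float tensor, y: int) immediately."""
            # Interleave the new example with a few stored ones to reduce forgetting.
            batch = [(x, y)] + random.sample(self.buffer, min(self.replay_k, len(self.buffer)))
            xs = torch.stack([b[0] for b in batch])
            ys = torch.tensor([b[1] for b in batch])
            self.opt.zero_grad()
            loss = self.loss_fn(self.model(xs), ys)
            loss.backward()
            self.opt.step()
            self._remember(x, y)
            return loss.item()

        def _remember(self, x, y):
            """Reservoir sampling keeps a bounded, unbiased sample of the stream."""
            self.seen += 1
            if len(self.buffer) < self.buffer_size:
                self.buffer.append((x, y))
            else:
                j = random.randrange(self.seen)
                if j < self.buffer_size:
                    self.buffer[j] = (x, y)

In use, fit_one(x, y) would be called once per example as it arrives in the stream, rather than collecting a full training set and retraining the whole model.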

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $499,960
Name: Rochester Institute of Tech
City: Rochester
State: NY
Country: United States
Zip Code: 14623