As computers advance from serving as our tools to becoming our helpers and collaborators, they must be capable of commonsense reasoning--for example, knowing that a person needs to use their hand to open a door (and that this might be a problem if they are carrying groceries). This kind of commonsense reasoning is a longstanding, elusive goal of artificial intelligence, but it is coming within reach today due to the availability of vast amounts of data and more powerful computational models for learning from those data. This project is aimed at a key step in enabling commonsense reasoning by machines: the automatic acquisition of commonsense knowledge. The project's approach builds upon recent breakthroughs in language models that learn by reading large amounts of text, and combines these in novel ways with explicit commonsense knowledge gathered from humans. The project probes new, scalable methods for humans to impart their commonsense knowledge to the system: using existing dictionaries and encyclopedias, building curricula that help machines work up to commonsense mastery step by step, and directly enforcing key logical constraints (for example, that if one item is bigger than another, then the second item must be smaller than the first). Success in this project could help power new virtual assistants, medical diagnosis and treatment systems, improved search engines, and other important applications of AI. The work also aims to enable the development of better language models themselves--improving current commercial technologies such as speech recognition and machine translation, and ultimately helping to power the next generation of computer systems capable of communicating with people more naturally using language. Along the way, the project will help train the next generation of students in these approaches and technologies, via education and outreach activities.

The technical strategy used in the project involves learning unsupervised neural language models (LMs) that capture textual distributions, and then extracting commonsense knowledge from those models. This approach is challenging because commonsense knowledge is multifarious and massive, yet rarely stated explicitly in text. The project aims to overcome this challenge using several methods for scalably incorporating human input in concert with neural LMs. First, the project studies how to use the explicit lexical knowledge found in dictionaries to improve LMs, extending prior work on modeling the definitions of terms with neural LMs. Next, the project is investigating a “scaffold” of semantic tasks (a task curriculum of increasing complexity), incrementally constructing models for each task in turn so that each model improves the learning of the next. Third, the project is developing methods for encoding commonsense logical constraints within neural LMs. Lastly, because time and energy costs are a potential barrier to applying the proposed techniques, the project is also studying how to make its approaches efficient. In particular, the project is investigating ways to scale up LMs to larger corpora while reducing the significant computational and energy cost of LM training, by learning to automatically identify text that will be more informative for training.
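To make the third method concrete, the following is a minimal sketch of one way a commonsense logical constraint--the bigger/smaller inverse relation mentioned above--could be enforced as a differentiable penalty during training. The RelationScorer model, the relation ids, and the penalty form are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch (PyTorch): enforcing an antisymmetry constraint as a
# differentiable penalty. All names here (RelationScorer, BIGGER, SMALLER)
# are hypothetical stand-ins, not the project's actual code.
import torch
import torch.nn as nn

class RelationScorer(nn.Module):
    """Toy stand-in for an LM-based statement scorer: maps an
    (entity, relation, entity) triple to a logit that the statement holds."""
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.out = nn.Linear(3 * dim, 1)

    def forward(self, x, r, y):
        h = torch.cat([self.ent(x), self.rel(r), self.ent(y)], dim=-1)
        return self.out(h).squeeze(-1)

BIGGER, SMALLER = 0, 1  # illustrative relation ids

def antisymmetry_penalty(model, x, y):
    """If the model believes "x is bigger than y", it should assign the
    same probability to "y is smaller than x"; penalize any gap."""
    p_fwd = torch.sigmoid(model(x, torch.full_like(x, BIGGER), y))
    p_inv = torch.sigmoid(model(y, torch.full_like(y, SMALLER), x))
    return ((p_fwd - p_inv) ** 2).mean()

# In training, the penalty would simply be added to the task loss:
#   loss = task_loss + lam * antisymmetry_penalty(model, x_batch, y_batch)
```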
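Similarly, the efficiency method could be instantiated by scoring candidate training texts with the current model's loss and keeping only the most informative fraction. The sketch below assumes a Hugging Face-style causal LM interface; the plain per-token-loss heuristic and the keep_frac parameter are illustrative assumptions, not the project's method.

```python
# Minimal sketch: loss-based selection of informative training text,
# assuming a Hugging Face-style causal LM where model(ids, labels=ids)
# returns an object with a .loss field. The heuristic is an assumption.
import torch

@torch.no_grad()
def select_informative(model, tokenizer, texts, keep_frac=0.25, device="cpu"):
    model.eval()
    scores = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
        # Higher per-token loss means the model predicts this text poorly,
        # so (under this heuristic) it is more informative to train on.
        scores.append(model(ids, labels=ids).loss.item())
    order = sorted(range(len(texts)), key=scores.__getitem__, reverse=True)
    k = max(1, int(keep_frac * len(texts)))
    return [texts[i] for i in order[:k]]
```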

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2020-10-01
Budget End: 2023-09-30
Fiscal Year: 2020
Total Cost: $470,000
Name: Northwestern University at Chicago
City: Chicago
State: IL
Country: United States
Zip Code: 60611