Human societies have developed many complex, large-scale symbolic systems, such as natural language, logic, and mathematics, which can encode very complicated information. While the primary application and motivation for symbolic systems is communication among different entities, either horizontally/spatially (e.g., a speaker giving a presentation in a meeting) or vertically/temporally (e.g., reading a history book), the information represented by these symbolic systems is ultimately created, revised, and processed by the human brain, a large network of neurons that processes information at the sub-symbolic level. What is the relationship and connection between symbolic processing and sub-symbolic processing? What internal structures and mechanisms at the sub-symbolic level support symbol-level processing? Is there a deep computational mechanism for symbolic systems beyond shallow techniques (e.g., string matching in Natural Language Processing)? These questions are fundamental to multiple research fields and scientific disciplines, and have attracted researchers and scientists across many generations, from the early study of denotation and connotation in philosophy to the more recent investigation of semantic space construction.

This project focuses on the modeling and representation of denotation information for words in a natural language. With its fundamental focus on understanding semantics at the sub-symbolic level, the project will provide valuable insight into natural language and human intelligence in general, pave the way toward a large-scale testbed for fields such as computational linguistics, psychology, and language acquisition, and bring broad interdisciplinary impact to many scientific fields. The project also includes a carefully crafted education component that directly promotes undergraduate and graduate research and training, encourages the participation of underrepresented minorities and women, and has a sustainable impact on Computer Science curricula and courseware beyond the scope of this project.

The overall goal of this project is to investigate how to represent a word with an internal structure (e.g., a neural network), going beyond the existing vector-space approach, in order to support symbolic processing techniques more sophisticated than shallow string matching. The project has three research objectives. The first is to study the various options for representing the internal structure of a word, a question closely related to the active research field of Neural Architecture Search. The second addresses data sparsity: because of the large vocabulary of a natural language and the complexity of the connotation information to be modeled, these neural architectures have a huge number of parameters to learn and tune, so a bootstrapping approach will be developed to overcome the data-sparsity problem that challenges many deep learning models. The third objective is to investigate, through an unsupervised approach, a viable path to large-scale knowledge acquisition, which is widely recognized as a serious barrier to building real-world Artificial Intelligence systems.
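To make the first objective's contrast concrete, below is a minimal sketch, assuming a PyTorch-style setup; the class names (VectorWord, NetworkWord) and all dimensions are hypothetical illustrations, not the project's actual architecture. It contrasts the conventional representation of a word as a single static vector with a representation in which each word owns a small trainable network, so that the word's meaning is computed from context rather than fixed.

```python
# Hypothetical sketch: a word as a static vector vs. a word as a small network.
import torch
import torch.nn as nn

class VectorWord(nn.Module):
    """Conventional vector-space embedding: one static vector per word."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.vec = nn.Parameter(torch.randn(dim))

    def forward(self, _context: torch.Tensor) -> torch.Tensor:
        # The representation is the same regardless of context.
        return self.vec

class NetworkWord(nn.Module):
    """Word with an internal structure: a small MLP mapping a context
    representation to a context-sensitive meaning representation."""
    def __init__(self, dim: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # The representation is computed from the context, so the same word
        # can yield different representations in different contexts.
        return self.net(context)

if __name__ == "__main__":
    ctx = torch.randn(32)
    print(VectorWord()(ctx).shape, NetworkWord()(ctx).shape)
```

In this sketch, the per-word parameters of NetworkWord are what a bootstrapping procedure (the second objective) would need to learn despite sparse training data for most words.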

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-07-01
Budget End: 2021-06-30
Fiscal Year: 2019
Total Cost: $104,000
Name: University of Massachusetts Boston
City: Dorchester
State: MA
Country: United States
Zip Code: 02125