One thing that makes human language so powerful is that its limited vocabulary can be used to share an unlimited number of ideas, and to say things that have never been said before. Understanding how words combine to express new meanings in a predictable (and therefore unlimited) way has been the focus of decades of research, and has led to increasing success by artificial intelligence systems in responding to direct queries, such as requests for directions or the weather. However, the meaning of language in face-to-face interaction is influenced by factors beyond words, and even beyond intonation: nonlinguistic visual information interacts clearly and regularly with speech. For example, when someone says they "took a note" while moving their fingers as if typing on a keyboard, this conveys that the note was electronic; likewise, a pointing gesture toward someone while saying "my neighbor" identifies that person as the one the label refers to. Technology currently exists to recognize most of these movements, but they are typically ignored when interpreting meaning, since so little is known about how they contribute to and compose with the other parts of a sentence. A valuable source of insight into this question comes from languages that are entirely visual: sign languages such as American Sign Language have their own fully distinct grammar and their own complex rules for the meanings of word combinations, and at the same time clearly and regularly integrate visual demonstrations and pointing into sentence meaning.

This project compares the role of visual information (demonstrations and pointing) in gestures used with English to its well-documented counterparts in American Sign Language. It builds on the project team's current work on theoretical models of sign language linguistics and gesture, expanding it to include state-of-the-art experimental methodologies and analysis. English and American Sign Language will be studied in parallel at each stage, both by gathering quantitative data from responses to videos presented online and by analyzing the productions of fluent language users. A major challenge is that research on the compositional processes underlying how words combine into more complex meanings requires a solid background in logic and computation, and scholars with this skill set tend to have limited overlap with the Deaf, Hard of Hearing, and fluent signing scholars who have the most insight into the quickly growing fields of sign linguistics and gesture. To address this gap, the project will include training opportunities for postdoctoral researchers, graduate students, and undergraduates whose studies focus on sign languages and gesture, providing experience with experimental methodology, statistical analysis, and logic and computation. The project will also include the development of a course and textbook dedicated to bridging this gap.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 1844186
Program Officer: Tyler Kendall
Budget Start: 2019-07-01
Budget End: 2024-06-30
Fiscal Year: 2018
Total Cost: $388,479
Name: Harvard University
City: Cambridge
State: MA
Country: United States
Zip Code: 02138