This project aims to discover how languages are learned and understood at the levels of both the word and the sentence. The first problem we address is how learners discover what words such as "dog" and "cat" signify. The second is how these words combine semantically in sentences such as "The dog bites the cat" versus "The cat bites the dog." Although the words in these two sentences are identical, in English their ordering (more precisely, the structure that binds them) determines their semantic roles (do-er or done-to) with respect to the biting act. Some languages rarely use this ordering method, and those that do may use different orders, so learners of English must acquire these properties by analyzing the speech they hear from adults to derive the English facts. In most cases children receive no explicit instruction about word meanings or syntax, yet they learn even so. This learning process applies not only to children but also to older individuals, including adults, acquiring second languages.

Much of the work proposed here uses a relatively new experimental technique, developed in our laboratories under earlier funding of this grant, in which children's eye gaze is tracked as they hear spoken descriptions of the surrounding visual world. Specifically, children hear instructions that require them to make an implicit choice about the intended structural organization of ambiguous utterances such as "I saw the man with a telescope." By manipulating potentially informative cues to the intended structure (e.g., verb information, prosodic (tune) information, and situational/discourse cues), we can use children's eye gaze and other behaviors to reveal their sensitivity to, and representation of, these information sources.
In the upcoming funding period, we propose: (a) to expand and test our developmental account of how children learn to recover the grammatical properties of a sentence as it is heard, by examining eye-gaze responses to ambiguous sentences at different ages; (b) to explore how the child uses multiple linguistic and non-linguistic cues to a speaker's intentions to uncover word and sentence meaning; (c) to examine what the child tracks regarding the meaning of verbs and other relational lexical items as they are heard; and (d) to examine how sentence comprehension procedures are learned and used in languages that differ markedly from English in the clues to meaning they offer (specifically Korean, Tagalog, and Kannada, and perhaps two others). The potential applications of these findings to education are significant: vocabulary and sentence-understanding skills are fundamental to successful functioning in the technological culture of the 21st century, and many children are in need of enrichment and remedial intervention. In addition, as the United States citizenry becomes progressively more multilingual and is increasingly drawn into global interactions, the ability to acquire second, and even third and fourth, languages becomes an ever more precious social and economic commodity.

Public Health Relevance

This project is designed to further our understanding of how young children learn what the words in their language mean, and how these words are combined to make meaningful sentences. The ability to understand spoken and written language rapidly and nearly errorlessly is a basic requirement for economic and social well-being in 21st-century American life. The findings are expected to be relevant to second language learning as well; multilingualism is an increasingly precious commodity for Americans as they interact more and more with speakers of different languages, both within the country and in cultures around the world.

Agency
National Institutes of Health (NIH)
Institute
Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type
Research Project (R01)
Project #
5R01HD037507-14
Application #
8387788
Study Section
Language and Communication Study Section (LCOM)
Program Officer
Miller, Brett
Project Start
1999-01-01
Project End
2013-09-20
Budget Start
2012-12-01
Budget End
2013-09-20
Support Year
14
Fiscal Year
2013
Total Cost
$262,292
Indirect Cost
$93,856
Name
University of Pennsylvania
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
042250712
City
Philadelphia
State
PA
Country
United States
Zip Code
19104
Cartmill, Erica A; Armstrong 3rd, Benjamin F; Gleitman, Lila R et al. (2013) Quality of early parent input predicts child vocabulary 3 years later. Proc Natl Acad Sci U S A 110:11278-83
Trueswell, John C; Medina, Tamara Nicol; Hafri, Alon et al. (2013) Propose but verify: fast mapping meets cross-situational word learning. Cogn Psychol 66:126-56
Medina, Tamara Nicol; Snedeker, Jesse; Trueswell, John C et al. (2011) How words can and cannot be learned by observation. Proc Natl Acad Sci U S A 108:9014-9
Li, Peggy; Abarbanell, Linda; Gleitman, Lila et al. (2011) Spatial reasoning in Tenejapan Mayans. Cognition 120:33-53
Choi, Youngon; Trueswell, John C (2010) Children's (in)ability to recover from garden paths in a verb-final language: evidence for developing control in sentence processing. J Exp Child Psychol 106:41-61
Novick, Jared M; Kan, Irene P; Trueswell, John C et al. (2009) A case for conflict across multiple domains: memory and language impairments following damage to ventrolateral prefrontal cortex. Cogn Neuropsychol 26:527-67
Papafragou, Anna; Hulbert, Justin; Trueswell, John (2008) Does language guide event perception? Evidence from eye movements. Cognition 108:155-84
Novick, Jared M; Thompson-Schill, Sharon L; Trueswell, John C (2008) Putting lexical constraints in context into the visual-world paradigm. Cognition 107:850-903
Snedeker, Jesse (2008) Effects of prosodic and lexical constraints on parsing in young children (and adults). J Mem Lang 58:574-608
Papafragou, Anna; Cassidy, Kimberly; Gleitman, Lila (2007) When we think about thinking: the acquisition of belief verbs. Cognition 105:125-65

Showing the most recent 10 out of 19 publications