Learning to speak a new language in adulthood has become a common occurrence in our increasingly interconnected world. Learners often find it difficult to perceive and produce sounds in the new language that differ from the sounds of their native language, and most adult learners never speak like a native, instead retaining a non-native accent. New evidence suggests that sleep may play a special role in solidifying learning of non-native sounds; moreover, the timing of learning with respect to sleep may be a crucial factor. This project assesses the relationship between sleep and non-native speech sound learning 1) by analyzing a large database from an industry partner specializing in language-learning software and 2) by imaging and analyzing the neural systems that underlie successful learning. A greater understanding of the optimal conditions for learning, one that integrates knowledge from perception, sleep, and neural processing, will suggest strategies for language instruction that best help learners acquire the sounds of a new language. The research program showcases an academic/industry partnership and the sharing of online tools developed for perceptual testing. In addition, outreach to middle-school students will help introduce these questions and experimental methods into STEM education.

The overarching goal of this project is to build a model of non-native speech sound learning, the Sleep Consolidation Model for Speech (SCMS), that integrates findings from the sleep, perception, and neurobiology literatures to identify the conditions that facilitate or constrain speech sound acquisition. Three linked projects test the idea that consolidation during sleep protects learned speech sound information from interference and also allows learners to generalize learning to new speech sound categories. Project 1 tests predictions of the SCMS using web-based training and sleep monitoring in a standard laboratory (college student) sample. Project 2 analyzes a large database from users of the Rosetta Stone language learning software, enabling the research team to extend these predictions to a more diverse and ecologically valid subject sample and to new target languages (e.g., Irish, Arabic). Project 3 evaluates the neural predictions of the SCMS using magnetic resonance imaging (MRI) to assess the relationship between brain structure and learning, and transcranial magnetic stimulation (TMS) to identify the neural locus of interference effects. The results should help optimize conditions for learning, for example by scheduling training in relation to sleep or by minimizing interfering stimuli and tasks between training and sleep. The results may also inform neural models of speech and learning by elucidating the role of white matter connectivity in non-native speech learning. Although the questions are couched in terms of a specific perceptual problem, learning a new language, the research also has implications for perceptual learning in other domains (e.g., novel visual category acquisition).

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 1554810
Program Officer: Betty Tuller
Budget Start: 2016-06-01
Budget End: 2021-05-31
Fiscal Year: 2015
Total Cost: $433,995
Name: University of Connecticut
City: Storrs
State: CT
Country: United States
Zip Code: 06269