The goal of the project is to reveal how normal human infants, during the second six months of postnatal development, acquire the sound structures that will become words in their native language. This process of early word learning, which occurs before infants begin to produce words in their own speech, must involve segmenting from fluent adult speech the stretches that correspond to words. Infants presumably use both acoustic cues, such as pauses and prosody (e.g., pitch and stress), and distributional cues, such as the statistical patterning of sequences of sounds, to solve this word-segmentation task. Using a preferential listening technique, preceded by a familiarization phase, 8-month-olds will be tested for their ability to segment multisyllabic word-like units from artificial language corpora. These corpora will be brief (2-4 minutes) and will be created by a speech synthesizer to control for the presence (or absence) of acoustic cues to word boundaries. The proposed experiments will examine the relative importance of acoustic and statistical cues to word boundaries, the temporal ordering of statistical cues, the limitations on which statistical cues can be used, and the robustness of statistical cues in long-term memory. These studies of word segmentation, therefore, will not only provide important information about a fundamental aspect of early language acquisition, but will also serve as a model system for examining other aspects of statistical language learning.
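As an illustration of the kind of distributional cue at issue, the sketch below assumes that the relevant statistic is the transitional probability between adjacent syllables (count of a syllable pair divided by the count of its first syllable); the toy trisyllabic "words" and corpus are hypothetical and stand in for a synthesized familiarization stream, not the project's actual stimuli.

```python
# Minimal sketch (illustrative assumptions only): transitional probabilities
# between adjacent syllables in a pause-free artificial-language stream.
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Return TP(x -> y) = count(x, y) / count(x) for adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical trisyllabic words, concatenated in random order with no pauses,
# mimicking a brief synthesized familiarization corpus.
words = ["tupiro", "golabu", "bidaku"]
random.seed(0)
stream = []
for _ in range(100):
    w = random.choice(words)
    stream.extend([w[i:i + 2] for i in range(0, 6, 2)])  # split into syllables

tps = transitional_probabilities(stream)
# Within-word transitions (e.g., "tu" -> "pi") approach 1.0, while
# between-word transitions are lower, marking likely word boundaries.
print(tps[("tu", "pi")], tps.get(("ro", "go")))
```

On this view, dips in transitional probability across the stream are candidate word boundaries even when no acoustic cue (pause, stress) is present, which is the contrast the synthesized corpora are designed to isolate.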