Eye movement monitoring is a powerful new tool for studying sentence planning and production. However, questions about the function of eye movements made before and during speech need to be answered in order to validly interpret current and future findings. Previous experiments that combined language production with eye movement monitoring suggest that when extemporaneously describing pictured events, speakers rapidly comprehend the nature of the event and identify the objects involved in it before selecting one object to be encoded as grammatical subject (Griffin & Bock, 1999). Speakers then spend the majority of the second preceding the onset of each noun fixating the object to which it refers. The proposed experiments further test the degree of event comprehension before grammatical encoding by slowing the extraction of visual information and examining the consequences for speech and eye movement patterns. Results are expected to show delays in speech onset and syntactic planning due to speakers' efforts to comprehend events before proceeding, rather than taking an opportunistic and incremental approach to syntactic planning. Eye movements provide insight into syntactic planning by indicating when, prior to speech, grammatical subjects and objects begin to be fixated differently. The proposed experiments also address the important question of why eye movements over mentioned objects occur before speech even though sufficient event comprehension and object recognition have taken place beforehand to allow selection of a grammatical subject. The impact of object degradation on eye movements over objects immediately before mentioning them will indicate whether the time locking of eye movements and word processing is due to the eye taking in necessary visual information or simply to the eye following the visual objects that correspond to the speaker's internal allocation of attention. If speech-timed eye movements take in information required for word processing, object degradation should lengthen the time spent gazing at an object immediately before naming it. Furthermore, this lengthening should occur regardless of the grammatical role of the degraded object's noun. Thus, the proposed experiments complement prior studies examining the scope of conceptual understanding prior to syntactic planning and tracing the time course of lexically encoding nouns. In addition, the experiments probe the function of the eye movements made before and during speech. The results are important not only for understanding normal sentence planning and production, but also for understanding how visual attention is deployed to meet the needs of the observer.
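As an illustrative sketch only (not part of the proposal), the key prediction about pre-naming gazes could be quantified roughly as follows. Everything here is hypothetical: the Fixation fields, the sample trials, the 1,000 ms window, and the noun-onset times are assumptions standing in for real eye-tracking and speech-onset data from the experiments described above.

```python
# Hypothetical sketch: summing gaze time on an object in the window just before its name.
from dataclasses import dataclass

@dataclass
class Fixation:
    object_name: str   # object the fixation landed on (e.g., "dog")
    start_ms: int      # fixation onset, relative to picture onset
    end_ms: int        # fixation offset, relative to picture onset

def gaze_before_naming(fixations, noun_onset_ms, object_name, window_ms=1000):
    """Total time spent fixating `object_name` in the window immediately
    preceding the onset of the noun that refers to it."""
    window_start = noun_onset_ms - window_ms
    total = 0
    for f in fixations:
        if f.object_name != object_name:
            continue
        # Clip each fixation to the pre-noun window and sum the overlap.
        overlap = min(f.end_ms, noun_onset_ms) - max(f.start_ms, window_start)
        if overlap > 0:
            total += overlap
    return total

# Hypothetical trials: intact vs. visually degraded versions of the same object.
intact_trial = [Fixation("dog", 200, 650), Fixation("mailman", 700, 1400),
                Fixation("dog", 1450, 1900)]
degraded_trial = [Fixation("dog", 200, 900), Fixation("mailman", 950, 1500),
                  Fixation("dog", 1550, 2300)]

# Noun onsets (ms) would come from the speech recording of each trial.
intact_gaze = gaze_before_naming(intact_trial, noun_onset_ms=2000, object_name="dog")
degraded_gaze = gaze_before_naming(degraded_trial, noun_onset_ms=2400, object_name="dog")

# The proposal predicts longer pre-naming gazes on degraded objects if speech-timed
# eye movements are gathering information needed to prepare the word.
print(f"Pre-naming gaze, intact:   {intact_gaze} ms")
print(f"Pre-naming gaze, degraded: {degraded_gaze} ms")
```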

Agency: National Institutes of Health (NIH)
Institute: National Institute of Mental Health (NIMH)
Type: Small Research Grants (R03)
Project #: 1R03MH061318-01
Application #: 6086081
Study Section: Special Emphasis Panel (SRCM)
Program Officer: Kurtzman, Howard S
Project Start: 2000-04-01
Project End: 2001-08-31
Budget Start: 2000-04-01
Budget End: 2001-08-31
Support Year: 1
Fiscal Year: 2000
Total Cost: $36,280
Indirect Cost:
Name: Stanford University
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 009214214
City: Stanford
State: CA
Country: United States
Zip Code: 94305
Arnold, Jennifer; Griffin, Zenzi M (2007) The effect of additional characters on choice of referring expression: Everyone counts. J Mem Lang 56:521-536
Griffin, Zenzi M; Oppenheimer, Daniel M (2006) Speakers gaze at objects while preparing intentionally inaccurate labels for them. J Exp Psychol Learn Mem Cogn 32:943-8
Griffin, Zenzi M (2004) The eyes are right when the mouth is wrong. Psychol Sci 15:814-21
Griffin, Zenzi M (2003) A reversed word length effect in coordinating the preparation and articulation of words in speaking. Psychon Bull Rev 10:603-9
Ferreira, Victor S; Griffin, Zenzi M (2003) Phonological influences on lexical (mis)selection. Psychol Sci 14:86-90
Griffin, Z M (2001) Gaze durations during speech reflect word selection and phonological encoding. Cognition 82:B1-14