A program of research is proposed to systematically investigate the perceptual and cognitive processes by which novel speech stimuli (tactile, Cued Speech, and auditory) are learned, identified, remembered, and integrated. It is hypothesized that lexical processes mediate the relationship between bottom-up perceptual constraints and higher-level cognitive/linguistic processes.

In Study 1, Computational Modeling of the Lexicon, psychophysical constraints for a set of tactile, auditory, and visual speech conditions will be estimated using nonsense syllable identifications obtained with a theoretically motivated set of stimuli. Two different tactile vocoders and a fundamental frequency (F0) device will be studied, as well as analogous auditory vocoder and F0 signals, and Cued Speech. A computational method will be used to examine effects of phonetic similarity/dissimilarity on the information available to address the lexicon (a toy version of this computation is sketched below), and a validation experiment will be conducted.

In Study 2, Perceptual Learning, perceptual sensitivity following identification training will be investigated in several experiments involving word stimuli with calibrated similarity. A main question is whether word identification training results in increased perceptual sensitivity (a generic sensitivity index is also sketched below).

In Study 3, Representation in Long-Term Memory, the hypothesis will be tested that modality and surface attributes of cross-modal stimuli are preserved in memory. A continuous recognition memory task will be employed in a series of experiments. If the hypothesis is confirmed, one implication is that perceptual encoding does not result simply in an abstract phonological code.

In Study 4, Cross-Modal Integration with Words, on-line methods will be used to study word recognition. The traditional methods of naming and lexical decision will be used to investigate audiovisual word identification, and results will be compared with those from a new phoneme monitoring task. The phoneme monitoring task will then be applied to investigate differences between audiovisual conditions, and differences between Cued Speech and visual-tactile speech.

Subjects in the proposed experiments will be adults with normal hearing and English as a first language, and adults with profound hearing losses who learned Cued Speech during the period of language acquisition. Overall, the proposed research departs from our previous work, which adopted pragmatic/engineering/psychophysical methods for developing tactile speech aids. Because it is known that speech perception can be affected by tactile speech and Cued Speech stimuli, we now propose to turn our attention to research on spoken language processing with these novel speech signals. The work will contribute to ameliorating the communication problems of individuals with profound hearing losses.
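The lexical modeling proposed in Study 1 rests on the idea that phonemes confusable under a given transmission condition collapse into perceptual equivalence classes, so a lexicon can be re-transcribed in those classes to estimate how much word-discriminating information a signal actually delivers. The following Python sketch illustrates that kind of computation; the confusion groups and the toy lexicon are invented for illustration and are not the proposal's stimuli or data.

    # Sketch: collapse a phonemically transcribed lexicon into the
    # equivalence classes induced by a perceptual confusion structure,
    # then count how many words remain uniquely identifiable.
    # The groupings and lexicon below are illustrative only.

    from collections import defaultdict

    # Hypothetical equivalence classes: phonemes within a group are
    # assumed mutually confusable under the modeled condition.
    GROUPS = [("p", "b", "m"), ("f", "v"), ("t", "d", "n"), ("k", "g")]

    CLASS_OF = {}
    for group in GROUPS:
        for phoneme in group:
            CLASS_OF[phoneme] = group[0]  # first member labels the class

    def transcribe(phonemes):
        """Map each phoneme to its equivalence-class label."""
        return tuple(CLASS_OF.get(p, p) for p in phonemes)

    # Toy lexicon of phoneme-tuple transcriptions (not the study's corpus).
    lexicon = {
        "pat": ("p", "ae", "t"),
        "bat": ("b", "ae", "t"),
        "mad": ("m", "ae", "d"),
        "fat": ("f", "ae", "t"),
        "cat": ("k", "ae", "t"),
    }

    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[transcribe(phonemes)].append(word)

    unique = [ws[0] for ws in classes.values() if len(ws) == 1]
    print(f"{len(unique)} of {len(lexicon)} words remain distinct")
    for pattern, words in classes.items():
        print(" ".join(pattern), "->", ", ".join(words))

Running the sketch shows three of the five toy words merging into a single pattern ("pat", "bat", "mad"), which is the kind of lexical ambiguity such modeling quantifies.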
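Studies 2 and 3 both hinge on measuring perceptual sensitivity independently of response bias. The standard signal-detection index for this is d' = z(hit rate) - z(false-alarm rate); the abstract does not state which index the proposal will use, so the snippet below is only a generic illustration, with a conventional 1/(2N) correction so that perfect rates do not yield infinite z-scores.

    # Sketch: sensitivity (d') from hit and false-alarm counts, the
    # standard signal-detection index; generic illustration only.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N)
        correction applied to rates of 0 or 1."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)),
                       1 - 1 / (2 * n_signal))
        fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)),
                      1 - 1 / (2 * n_noise))
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Example: 45 hits, 5 misses, 12 false alarms, 38 correct rejections.
    print(round(d_prime(45, 5, 12, 38), 2))  # about 1.99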

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC000695-08
Application #
2518062
Study Section
Sensory Disorders and Language Study Section (CMS)
Project Start
1988-07-01
Project End
2000-08-31
Budget Start
1997-09-01
Budget End
1998-08-31
Support Year
8
Fiscal Year
1997
Total Cost
Indirect Cost
Name
House Ear Institute
Department
Type
DUNS #
City
Los Angeles
State
CA
Country
United States
Zip Code
90057
Auer Jr, E T; Bernstein, L E (2007) Enhanced visual speech perception in individuals with early-onset hearing impairment. J Speech Lang Hear Res 50:1157-65
Bernstein, L E; Auer Jr, E T; Tucker, P E (2001) Enhanced speechreading in deaf adults: can short-term training/practice close the gap for hearing adults? J Speech Lang Hear Res 44:5-18
Bernstein, L E; Tucker, P E; Auer Jr, E T (1998) Potential perceptual bases for successful use of a vibrotactile speech perception aid. Scand J Psychol 39:181-6
Auer Jr, E T; Bernstein, L E; Coulter, D C (1998) Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals. J Acoust Soc Am 104:2477-89
Demorest, M E; Bernstein, L E (1997) Relationships between subjective ratings and objective measures of performance in speechreading sentences. J Speech Lang Hear Res 40:900-11
Demorest, M E; Bernstein, L E; DeHaven, G P (1996) Generalizability of speechreading performance on nonsense syllables, words, and sentences: subjects with normal hearing. J Speech Hear Res 39:697-713
Bernstein, L E; Demorest, M E; Eberhardt, S P (1994) A computational approach to analyzing sentential speech perception: phoneme-to-phoneme stimulus-response alignment. J Acoust Soc Am 95:3617-22
Demorest, M E; Bernstein, L E (1992) Sources of variability in speechreading sentences: a generalizability analysis. J Speech Hear Res 35:876-91
Bernstein, L E; Demorest, M E; Coulter, D C et al. (1991) Lipreading sentences with vibrotactile vocoders: performance of normal-hearing and hearing-impaired subjects. J Acoust Soc Am 90:2971-84
Eberhardt, S P; Bernstein, L E; Demorest, M E et al. (1990) Speechreading sentences with single-channel vibrotactile presentation of voice fundamental frequency. J Acoust Soc Am 88:1274-85