Perception of visible speech (lipreading/speechreading) can be sufficient for acquiring a first language and for carrying out everyday communication. Under noisy conditions, individuals with hearing loss may need to rely on lipreading even when they have sensory aids (hearing aids, cochlear implants). Lipreading demonstrates plasticity in individuals with prelingual profound hearing impairment (deafness), such that accuracy in deaf adults frequently far exceeds accuracy in normal-hearing adults. The long-term research aim of this project is to explain, in terms of sensory and perceptual processes and their underlying neural mechanisms, lipreading in expert deaf lipreaders, in individuals with normal hearing, and in individuals with idiopathic sudden profound hearing loss. The long-term clinical aim is to develop effective tools for lipreading training and assessment.

In Study 1, it is hypothesized that expert deaf adult lipreaders are more capable than normal-hearing adults at the level of lower-level sensory constraints related to visual spatial frequency processing; the study applies a method for estimating critical visual spatial frequency bands using contrast thresholds. In Study 2, it is hypothesized that faster and more sensitive perceptual processing of the visual phonetic stimulus is a major component of the expert lipreader's advantage; a speeded discrimination method will be used to study perceptual sensitivity and long-term memory structure for visual phonetic speech. In Study 3, the same hypothesis will be tested, but with stimuli generated by a new visual speech synthesizer. In Study 4, the hypothesis is that expert lipreading is associated with phonetic exogenous and possibly neuroplastic cortical effects; electrophysiological event-related potentials (ERPs) will be recorded during a target discrimination task. Prelingually deaf adults who are expert lipreaders will be compared with normal-hearing adults to isolate sources of the expert's lipreading advantage. Patients with sudden hearing loss will be recruited to study perceptual and neuroplastic effects that may occur rapidly following an abrupt change in input stimulation.

Relevance to public health: The experiments are designed to lead to more effective lipreading assessment and training, which is relevant to a wide range of conditions involving hearing loss and to degraded perception under noisy conditions. Applications of the visual speech synthesizer include computer-based training systems for speech perception in children with hearing impairments. Knowledge about learning and plasticity in adults is important, as hearing loss frequently accompanies aging.
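The perceptual sensitivity measured in the speeded discrimination paradigm of Study 2 is typically quantified within signal detection theory. The sketch below is a minimal illustration, assuming a standard d' (d-prime) computation with a log-linear correction for extreme response rates; the function name and trial counts are hypothetical and are not drawn from the project's actual analysis.

```python
# Illustrative only: standard signal-detection sensitivity (d') of the kind often
# used to summarize speeded discrimination data. Counts below are hypothetical.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count's numerator, 1.0 to the
    denominator) keeps rates of exactly 0 or 1 from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse cumulative normal
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from one "same/different" discrimination block:
print(d_prime(hits=78, misses=22, false_alarms=15, correct_rejections=85))
```

Higher d' indicates greater sensitivity to the visual phonetic contrast independent of response bias, which is why this kind of measure is well suited to comparing expert deaf lipreaders with normal-hearing adults.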

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC008583-03
Application #: 7661371
Study Section: Cognition and Perception Study Section (CP)
Program Officer: Shekim, Lana O
Project Start: 2007-07-15
Project End: 2010-03-31
Budget Start: 2009-07-01
Budget End: 2010-03-31
Support Year: 3
Fiscal Year: 2009
Total Cost: $80,498
Indirect Cost:
Name: House Research Institute
Department:
Type:
DUNS #: 062076989
City: Los Angeles
State: CA
Country: United States
Zip Code: 90057
Tjan, Bosco S; Chao, Ewen; Bernstein, Lynne E (2014) A visual or tactile signal makes auditory speech detection more efficient by reducing uncertainty. Eur J Neurosci 39:1323-31
Bernstein, Lynne E; Jiang, Jintao; Pantazis, Dimitrios et al. (2011) Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays. Hum Brain Mapp 32:1660-76
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng et al. (2010) Sulcal set optimization for cortical surface registration. Neuroimage 50:950-9
Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao et al. (2010) Comparison of landmark-based and automatic methods for cortical surface registration. Neuroimage 49:2479-93