The long-term goals of this translational project are to develop a set of multimodal tests of sentence recognition and to collect normative data from adults and children. Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition (SWR) routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The central cognitive processes of multimodal integration, perceptual normalization, and lexical discrimination, which may contribute to individual variation in SWR performance, are not assessed by conventional tests of this kind. The new multimodal sentence tests will be based on paradigms and theoretical models borrowed from the psychological sciences. The research will encompass five milestones.

Milestone 1. Construct two multimodal sentence tests (adult and child versions). Lexical characteristics of the sentences will be carefully controlled following the assumptions of a current model of SWR.

Milestone 2. Create audiovisually recorded, multi-talker versions of the sentence sets. Pilot data will be used to create multi-talker sentence lists that are equivalent within a given presentation format (auditory-only [A-only], visual-only [V-only], or auditory plus visual [AV]).

Milestone 3. Verify reliability and establish validity of the sentence sets in a sample of adults and children with hearing loss.

Milestone 4. Collect normative data on performance in the A-only, V-only, and AV presentation formats. Data obtained from adults and children will be analyzed separately as a function of hearing status (normal hearing, or mild, moderate, severe, or profound hearing loss in the better hearing ear) and type of sensory aid (hearing aid or cochlear implant).

Milestone 5. Distribute test materials. The test packets will consist of DVDs containing the multimodal sentence lists as well as an instruction booklet, data-gathering forms, and a manual for data interpretation.

These new sentence tests should provide important insights into the SWR differences and enormous variability noted among individuals with hearing loss, and should lead to better diagnostic, evaluation, and assessment paradigms. Information obtained from these new measures should prove useful in selecting sensory aids and in developing intervention programs targeted to an individual's specific needs.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC008875-02
Application #: 7382482
Study Section: Special Emphasis Panel (ZDC1-SRB-O (19))
Program Officer: Sklare, Dan
Project Start: 2007-03-09
Project End: 2012-01-31
Budget Start: 2008-02-01
Budget End: 2009-01-31
Support Year: 2
Fiscal Year: 2008
Total Cost: $441,116
Indirect Cost:
Name: Purdue University
Department: Other Health Professions
Type: Schools of Arts and Sciences
DUNS #: 072051394
City: West Lafayette
State: IN
Country: United States
Zip Code: 47907
Sjoberg, Kristin M; Driscoll, Virginia D; Gfeller, Kate et al. (2017) The impact of electric hearing on children's timbre and pitch perception and talker discrimination. Cochlear Implants Int 18:36-48
Krull, Vidya; Luo, Xin; Iler Kirk, Karen (2012) Talker-identification training using simulations of binaurally combined electric and acoustic hearing: generalization to speech and emotion recognition. J Acoust Soc Am 131:3069-78
Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S et al. (2012) Studies in pediatric hearing loss at the House Research Institute. J Am Acad Audiol 23:412-21
Kirk, Karen Iler; Prusick, Lindsay; French, Brian et al. (2012) Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach. J Am Acad Audiol 23:464-75
Holt, Rachael Frush; Kirk, Karen Iler; Hay-McCutcheon, Marcia (2011) Assessing multimodal spoken word-in-sentence recognition in children with normal hearing and children with cochlear implants. J Speech Lang Hear Res 54:632-57
Krull, Vidya; Choi, Sangsook; Kirk, Karen Iler et al. (2010) Lexical effects on spoken-word recognition in children with normal hearing. Ear Hear 31:102-14
Wang, Nan Mai; Wu, Che-Ming; Kirk, Karen Iler (2010) Lexical effects on spoken word recognition performance among Mandarin-speaking children with normal hearing and cochlear implants. Int J Pediatr Otorhinolaryngol 74:883-90
Kirk, Karen Iler; Hay-McCutcheon, Marcia J; Holt, Rachael Frush et al. (2007) Audiovisual Spoken Word Recognition by Children with Cochlear Implants. Audiol Med 5:250-261