This project will test the feasibility of a computerized speech perception assessment and training system (SPATS-AR) as a novel aphasia rehabilitation option. SPATS-AR will be developed by modifying existing software that has proven successful in improving speech recognition by hearing-impaired listeners (SPATS-HI) and by adult learners of English (SPATS-ESL). The modifications needed to adapt the system for aphasia rehabilitation will be identified, and the potential of the modified system will be evaluated. The project has a special focus on improving speech-perception capacities. This holds promise for improving language comprehension skills more broadly, as speech-perception impairments appear to have a cascading effect on higher levels of language, negatively impacting comprehension of words and sentences. SPATS provides objective, automated training in the identification of the building blocks of spoken English (syllable constituents) and in the recognition of naturally spoken, meaningful sentences. Training in identifying the constituents of syllables (onsets, nuclei, and codas) is designed to give clients the skills needed for attacking difficult words and monitoring their own speech. These sounds have been ranked in importance based on their lexical and textual frequencies of occurrence. Three special features have proven effective in training: (a) the graduated introduction of items based on each item's importance and the client's performance; (b) Adaptive Item Selection (AIS), a proprietary method of selecting items for training; and (c) the "post-response rehearing" option, which allows the client to compare the target sound with the one mistaken for it. Interleaved with constituent training is a unique sentence testing and training system, which gives practice in top-down cognitive (e.g., working memory) and bottom-up linguistic skills.
Importantly, in aphasia, comprehension problems have been found to be related not only to linguistic processing difficulties (at sublexical, lexical, and/or syntactic levels) but also to concomitant cognitive deficits (e.g., attention, working memory). Thus, SPATS appears well suited to target the listening, reading, and cognitive skills that are frequently compromised in aphasia. In Phase I, participants with mild to moderate aphasia will be recruited from the Indiana University Speech and Hearing Clinic. It is anticipated that the syllable-constituent tasks will need to be greatly simplified through the use of smaller sets of sounds. The sentence task will need to be modified by adding new sentences with simpler syntactic constructions, increasing the signal-to-noise ratios, and simplifying the display of target and foil words. The strategy will be to begin with stimuli and procedures similar to those now used with the ESL and HI populations and then to alter the procedures as dictated by the performance of clients with aphasia. It is anticipated that SPATS-AR, as finalized in Phase II, will be a valuable rehabilitative option not only because of its content but also because of its telerehabilitation potential for at-home practice.
This project will test the feasibility of a computerized Speech Perception Assessment and Training System (SPATS) as a novel aphasia treatment protocol. SPATS has proven effective in training both hearing-impaired listeners and learners of English as a second language to improve their ability both to identify English speech sounds and to recognize naturally spoken, meaningful sentences. A version of SPATS specifically designed to meet the needs of individuals with aphasia has the potential to improve these individuals' word recognition, comprehension, and cognitive skills and to offer a telerehabilitation option that may increase the efficiency of therapy, foster at-home practice, and bring treatment activities to those unable to regularly attend speech-language therapy sessions.