The human brain displays remarkable adaptation to novel types of sensory information. A striking example is that of deaf-blind individuals who have learned to perceive spoken language through their sense of touch, by placing a hand on the face and throat of a speaker. This demonstrates that the somatosensory system can support speech perception, a function normally considered the domain of hearing. Drs. Maximilian Riesenhuber of Georgetown University and Lynne E. Bernstein of George Washington University, along with their multidisciplinary team, are using advanced functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to investigate the neural mechanisms underlying the learning of artificial and speech categories by the somatosensory system. In their research, they are using a novel transducer to present high-dimensional stimuli to the forearms of participants who are trained on artificial or speech categories. The team is addressing whether perceptual learning of artificial categories of somatosensory patterns follows the principles known to govern auditory and visual category learning. For their second aim, the researchers are training participants to recognize spoken words that have been transformed into patterns of vibration. These speech stimuli are designed to address questions about cross-sensory learning and the linking of speech categories across hearing and vision. Before and after training, fMRI and EEG measures are applied to determine where and when in the brain newly learned categories are represented. This project is pushing the frontiers of knowledge about the brain's plasticity for learning novel somatosensory categories, including demonstrating for the first time the neural bases of speech learning through the sense of touch.
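The abstract does not specify how spoken words are converted into vibration patterns, but a common approach in the tactile-speech literature is a vocoder-style transform: the audio is split into frequency bands, and each band's slow amplitude envelope modulates a vibrotactile carrier driving one actuator. The Python sketch below illustrates this general idea only; the function name, band edges, envelope cutoff, and 250 Hz carrier are illustrative assumptions, not the project's actual transducer design.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def speech_to_vibration(audio, fs, band_edges=(100, 400, 1000, 2500, 6000),
                            env_cutoff_hz=30.0, carrier_hz=250.0):
        # Hypothetical vocoder-style mapping from a speech waveform to
        # multichannel vibration drive signals; all parameters are assumptions.
        t = np.arange(len(audio)) / fs
        # Sinusoidal carrier near the skin's peak vibrotactile sensitivity.
        carrier = np.sin(2 * np.pi * carrier_hz * t)
        channels = []
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            # Isolate one frequency band of the speech signal.
            band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, audio)
            # Extract its slow amplitude envelope (rectify, then low-pass).
            env_sos = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
            envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
            # The envelope modulates the tactile carrier for one actuator.
            channels.append(envelope * carrier)
        return np.stack(channels)  # shape: (n_actuators, n_samples)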
Understanding the general principles of sensory processing in the brain, and in particular the commonalities and differences in the underlying neural mechanisms across sensory modalities, is of great interest for practical applications such as the design of neuroprostheses for hearing or vision disorders. For example, patients with damage to the auditory or visual system may benefit from devices that substitute vibrotactile stimuli for information no longer available through the damaged sense. Vibrotactile stimuli can also be combined with visual or auditory stimuli to improve speech perception in noisy environments such as an aircraft cockpit. The fMRI and EEG data from this project, along with detailed records kept during participant training, will be made available to the research community. The brain measures obtained before and after training will be valuable for cost-effective testing of new hypotheses about brain plasticity and learning. Research results will be broadly disseminated through publications and conference presentations. The project will also be leveraged extensively to train the next generation of scientists at the graduate and undergraduate levels, with a particular focus on underrepresented minorities.