Our laboratory studies the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics. To do this, we use large-scale computer models of neuronal dynamics that perform either a visual or an auditory object-matching task similar to those designed for PET/fMRI/MEG studies; a review of both models can be found in Horwitz & Husain (2007). We also develop computational methods for fMRI and MEG data that allow us to investigate functional brain networks in normal human subjects and in patients with sensory and cognitive processing disorders.

In a typical MEG functional connectivity analysis, researchers select one of several measures of the relationship between regions, such as coherence or power correlations, yet it is largely unknown whether some measures are better suited than others to particular types of investigation. In a collaboration with NIMH, we (Ard et al., 2015) therefore evaluated seven connectivity metrics for their sensitivity to audiovisual integration, contrasting connectivity between auditory and visual areas while subjects tracked an audiovisual object versus a visual object uncorrelated with the auditory stimulus. We found that amplitude-based connectivity measures in the beta band detected strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere, although such measures may not always be the best choice for detecting functional connectivity (a minimal illustration of two such metrics appears after this summary).

Our laboratory has also been interested in elucidating the neural basis of speech production and its disorders. Neuroimaging research indicates that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. To examine this, we used graph-theoretical analysis of fMRI data acquired in healthy subjects to quantify large-scale speech network topology, constructing functional brain networks of increasing hierarchy from the resting state, to motor output of meaningless syllables, to complex production of real-life speech, and comparing them with networks for non-speech sequential finger tapping and pure-tone discrimination (Fuertinger et al., 2015). We identified a segregated network of highly connected local neural communities (hubs) in primary sensorimotor and parietal regions that formed a core hub network shared across the examined conditions, with left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs: they participated in several functional domains across different networks and adaptively switched their long-range functional connectivity depending on task content, giving each examined network a distinct community structure (see the first sketch below). Compared to the other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which together constituted the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex for such operational heterogeneity challenges the established view of this region as unimodal.
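As a rough, self-contained illustration of the graph measures behind this kind of analysis, the Python sketch below thresholds a surrogate region-by-region functional connectivity matrix, detects communities, and scores candidate hubs by degree and participation coefficient. The random matrix, the 0.5 threshold, and the modularity-based community algorithm are illustrative assumptions, not the published pipeline of Fuertinger et al. (2015).

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
n_regions = 90
# Symmetric surrogate matrix standing in for an fMRI functional connectivity matrix
c = rng.uniform(-0.2, 0.8, size=(n_regions, n_regions))
corr = (c + c.T) / 2
np.fill_diagonal(corr, 0.0)

# Binarize at an assumed threshold and build an undirected graph
adj = (corr > 0.5).astype(int)
G = nx.from_numpy_array(adj)

communities = list(greedy_modularity_communities(G))
membership = {node: ci for ci, com in enumerate(communities) for node in com}

def participation_coefficient(G, membership):
    # P_i = 1 - sum_c (k_ic / k_i)^2: high when a node's edges are spread
    # across many communities, as expected for "flexible" hubs
    pc = {}
    for node in G:
        k = G.degree(node)
        if k == 0:
            pc[node] = 0.0
            continue
        counts = {}
        for nbr in G.neighbors(node):
            counts[membership[nbr]] = counts.get(membership[nbr], 0) + 1
        pc[node] = 1.0 - sum((kc / k) ** 2 for kc in counts.values())
    return pc

pc = participation_coefficient(G, membership)
hubs = sorted(G, key=lambda n: (G.degree(n), pc[n]), reverse=True)[:5]
print("number of communities:", len(communities))
print("candidate hub regions:", hubs)

Nodes that combine high degree with a high participation coefficient correspond to the "flexible hubs" described above, whose connections are distributed across several network communities rather than confined to one.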
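Similarly, the next sketch contrasts two of the metric families compared in the MEG study of Ard et al. (2015): spectral coherence and beta-band amplitude-envelope correlation between two time series. The synthetic signals, the 600 Hz sampling rate, and the 13-30 Hz beta-band limits are assumptions chosen for illustration rather than details of the published analysis.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert, coherence

fs = 600.0                              # assumed MEG sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Two synthetic "auditory" and "visual" signals sharing an amplitude-modulated
# beta-band component plus independent noise
am = 1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
shared = am * np.sin(2 * np.pi * 20 * t)
aud = shared + rng.standard_normal(t.size)
vis = shared + rng.standard_normal(t.size)

# 1) Spectral coherence, averaged over the beta band
f, cxy = coherence(aud, vis, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)
beta_coherence = cxy[beta].mean()

# 2) Amplitude-envelope correlation: band-pass to beta, take the Hilbert
#    envelope, then correlate the two envelopes
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
env_aud = np.abs(hilbert(filtfilt(b, a, aud)))
env_vis = np.abs(hilbert(filtfilt(b, a, vis)))
envelope_corr = np.corrcoef(env_aud, env_vis)[0, 1]

print(f"beta-band coherence:            {beta_coherence:.3f}")
print(f"beta-band envelope correlation: {envelope_corr:.3f}")

Coherence is sensitive to consistent phase relationships between the signals, whereas the envelope correlation captures slower co-fluctuations of band-limited amplitude, the family of amplitude-based measures that proved most sensitive to audiovisual integration in that study.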
Furthermore, we performed neuroimaging experiments to understand the neural basis of some aspects of language processing in bilingual speakers (Coderre et al., in press). The need to control multiple languages is thought to require domain-general executive control in bilinguals, such that the executive control and language systems become interdependent. However, there has been no systematic investigation of how and where executive control and language processes overlap in the bilingual brain. We hypothesized that if the concurrent recruitment of executive control during bilingual language processing is domain-general and extends to non-linguistic control, then regions commonly involved in language processing, linguistic control, and non-linguistic control may be selectively altered in bilinguals compared to monolinguals. A conjunction of fMRI data from a flanker task with linguistic and non-linguistic distractors and from a semantic categorization task showed functional overlap in the left inferior frontal gyrus (LIFG) in bilinguals, whereas no such overlap occurred in monolinguals. This research therefore identifies a neural locus of the functional overlap of language and executive control in the bilingual brain.

Although the neural substrates of auditory word recognition have been a topic of inquiry since the heyday of classical neurology, they remain poorly understood. The preponderance of evidence from human functional imaging studies now places auditory word recognition in the anterior superior temporal gyrus (STG). A member of our laboratory discusses how recent lesion studies provide compelling evidence for the causal involvement of anterior STG in auditory single-word comprehension (DeWitt and Rauschecker, in press).

We also have examined how the brain processes complex sounds, specifically harmonics. Many speech sounds and animal vocalizations contain components consisting of a fundamental frequency (F0) and higher harmonics. Animals and humans rapidly detect such specific features of sounds, but the time course of the underlying neural decision processes is largely unknown. Moreover, complex auditory processing involves multiple pathways whose intricate functional organization is poorly understood. To address this, we (Banerjee, Kikuchi, Mishkin, Rauschecker and Horwitz, submitted) computed neuronal response latencies from simultaneously recorded spike trains and local field potentials (LFPs) along the first two stages of cortical sound processing, primary auditory cortex (A1) and lateral belt (LB), in awake, behaving macaques. Two types of response latency were measured for both spike trains and LFPs: 1) onset latency, time-locked to the onset of the external auditory stimulus, and 2) discrimination latency, the time from stimulus onset to neuronal discrimination between different stimulus categories. Trial-by-trial LFP onset latencies always preceded spike onset latencies. In A1, simple sounds such as pure tones yielded shorter spike onset latencies than complex sounds such as monkey vocalizations (coos, whose F0 was matched to a corresponding pure-tone stimulus). This trend was reversed in LB, indicating a hierarchical functional organization of the macaque auditory cortex. LFP discrimination latencies in A1 were always shorter than those in LB, reflecting the serial arrival of stimulus-specific information in these areas.
Thus, chronometry on spike-LFP signals revealed some of the effective neural circuitry underlying complex sound discrimination.
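To make the two latency measures concrete, the sketch below applies common heuristic estimators to simulated trial-by-trial spike counts (the same logic applies to LFP amplitudes): onset latency as the first post-stimulus time at which the smoothed trial-averaged firing rate exceeds a baseline-derived threshold, and discrimination latency as the first post-stimulus time at which responses to two stimulus categories differ. The simulated rates, smoothing, threshold rule, and sliding t-test are illustrative assumptions, not the specific estimators used in Banerjee et al.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
dt = 0.001                                  # 1-ms bins
time = np.arange(-0.2, 0.4, dt)             # 200 ms baseline, 400 ms after stimulus onset
n_trials = 60
kernel = np.ones(10) / 10                   # 10-ms boxcar for smoothing rate estimates

def simulate(rate_hz, onset_s):
    # Poisson spike counts per 1-ms bin: 5 Hz baseline, then rate_hz after onset_s
    rate = np.where(time < onset_s, 5.0, rate_hz)
    return rng.poisson(rate * dt, size=(n_trials, time.size))

tone_trials = simulate(40.0, 0.020)         # hypothetical pure-tone responses
coo_trials = simulate(25.0, 0.035)          # hypothetical coo-vocalization responses

def onset_latency(trials):
    # First post-stimulus time at which the smoothed trial-averaged rate
    # exceeds the baseline mean + 3 SD (baseline taken from t < 0)
    mean_rate = np.convolve(trials.mean(axis=0) / dt, kernel, mode="same")
    base = mean_rate[time < 0]
    above = np.where((time >= 0) & (mean_rate > base.mean() + 3 * base.std()))[0]
    return time[above[0]] if above.size else np.nan

def discrimination_latency(a, b, alpha=0.01):
    # First post-stimulus time at which the two stimulus categories differ
    # (independent-samples t-test across trials on smoothed single-trial rates)
    ra = np.apply_along_axis(lambda x: np.convolve(x / dt, kernel, mode="same"), 1, a)
    rb = np.apply_along_axis(lambda x: np.convolve(x / dt, kernel, mode="same"), 1, b)
    _, p = ttest_ind(ra, rb, axis=0)
    sig = np.where((time >= 0) & (p < alpha))[0]
    return time[sig[0]] if sig.size else np.nan

print("onset latency, tone (s):   ", onset_latency(tone_trials))
print("discrimination latency (s):", discrimination_latency(tone_trials, coo_trials))

In this scheme, an area whose responses cross the onset threshold earlier has a shorter onset latency, and an area whose responses separate the two stimulus categories earlier has a shorter discrimination latency, which is the sense in which A1 preceded LB in the study described above.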

Support Year: 17
Fiscal Year: 2015
Name: Deafness & Other Communication Disorders
Banerjee, Arpan; Kikuchi, Yukiko; Mishkin, Mortimer et al. (2018) Chronometry on Spike-LFP Responses Reveals the Functional Neural Circuitry of Early Auditory Cortex Underlying Sound Processing and Discrimination. eNeuro 5:
Corbitt, Paul T; Ulloa, Antonio; Horwitz, Barry (2018) Simulating laminar neuroimaging data for a visual delayed match-to-sample task. Neuroimage 173:199-222
Liu, Qin; Ulloa, Antonio; Horwitz, Barry (2017) Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory. J Cogn Neurosci:1-17
Xu, Benjamin; Sandrini, Marco; Wang, Wen-Tung et al. (2016) PreSMA stimulation changes task-free functional connectivity in the fronto-basal-ganglia that correlates with response inhibition efficiency. Hum Brain Mapp 37:3236-49
Ulloa, Antonio; Horwitz, Barry (2016) Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex. Front Neuroinform 10:32
Ard, Tyler; Carver, Frederick W; Holroyd, Tom et al. (2015) Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics. Brain Connect 5:336-48
Fuertinger, Stefan; Horwitz, Barry; Simonyan, Kristina (2015) The Functional Connectome of Speech Control. PLoS Biol 13:e1002209
Horwitz, Barry (2014) The elusive concept of brain network. Comment on "Understanding brain networks and brain organization" by Luiz Pessoa. Phys Life Rev 11:448-51
Simonyan, Kristina; Herscovitch, Peter; Horwitz, Barry (2013) Speech-induced striatal dopamine release is left lateralized and coupled to functional striatal circuits in healthy humans: a combined PET, fMRI and DTI study. Neuroimage 70:21-32
Horwitz, Barry; Hwang, Chuhern; Alstott, Jeff (2013) Interpreting the effects of altered brain anatomical connectivity on fMRI functional connectivity: a role for computational neural modeling. Front Hum Neurosci 7:649

Showing the most recent 10 out of 41 publications