This collaborative work began approximately a year and a half ago; to date, approximately 14 subjects have been studied and two abstracts have been submitted and presented. These studies are based on ongoing NIH-funded research in the PI's (Dr. Bellugi's) laboratory on the biological foundations of human language, approached through the study of American Sign Language (ASL). ASL displays all of the complex linguistic structure of spoken languages, but encodes that information spatially. Thus, ASL allows one to dissociate modality-dependent from modality-independent contributions to the neural organization for language. Space is used in a multifunctional way in ASL to encode (1) syntactic relations among individual signs within a sentence, (2) discourse relations among the players in a discourse across sentences, and (3) spatial relations themselves in the context of describing spatial layouts. These distinct functions of space are layered one upon another in sign, and yet brain organization reflects these functional distinctions. ASL's extensive reliance on spatial contrasts in the encoding of linguistic structure would suggest greater right-hemisphere involvement; however, there is strong evidence from lesion studies that ASL is processed predominantly in the left cerebral hemisphere and is to a large extent independent of non-linguistic spatial cognition. Our initial focus is to examine the issue of hemispheric specialization for language in deaf subjects. We have already accumulated a large body of fMRI data. The studies are conducted in multislice mode covering the whole brain, so that information is available on all brain areas activated during ASL-based paradigms in native signers.
Four tasks have been examined so far: (a) covert signing of objects: subjects were asked to imagine the ASL signs for objects displayed on the screen in succession every 2 seconds (bee, flower, apple, car, etc.; objects that would be spelled using the English-based manual alphabet rather than represented by a sign were avoided); (b) overt signing of objects with one hand: same as the covert task, but subjects produced the signs with the hand adjacent to their leg to minimize motion (this is not an unnatural task; this form of signing is employed in situations requiring some degree of privacy and is commonly referred to as "whispering"); (c) covert sign generation: subjects were shown a handshape every 5 seconds and were asked to think of as many ASL signs as they could that contained that handshape; (d) reproducing "nonsense" handshapes: handshapes chosen so as to have no meaning in ASL were displayed to the subjects in succession on the back-projection screen, and subjects were asked to reproduce them. In these preliminary analyses of the data (not all data from all subjects have been examined), extensive and consistent activation was observed during the paradigms listed above. Most notably, in both covert language tasks, the areas activated included (i) areas dorsal to the Sylvian fissure, including area 44 (Broca's area); (ii) portions of areas 9 and 8; (iii) medial-wall motor areas, including portions of the supplementary motor area (SMA), pre-SMA, and the cingulate motor areas buried in the cingulate sulcus; (iv) lateral motor areas 4 and 6 (despite no actual signing); (v) areas 7, 40, 42, and 22 of the parietal cortex; and (vi) a portion of area 24 located anteriorly in the cingulate gyrus. Language-related paradigms produced activation predominantly in the left frontal lobe, but bilateral activation was present in the occipital/temporal/parietal cortex. Extensive analysis remains to be performed on the existing large body of data.
Analogous tasks will be examined in native English speakers for comparison; we have already collected some data from native English speakers, but not for all paradigms. New data will be collected based on the results of the initial studies.