Where does human language come from? Which aspects of linguistic ability are innate, and which are learned from experience? Which features of language are shaped by 'deep' properties of thought and sentence structure, and which by 'surface' properties such as how words sound or how they are articulated? These are the big questions about language, and their answers promise insights not only into how language is learned and how language disorders should be treated, but into the very nature of this crucial human ability. With NSF funding, members of the Center for Research in Language at the University of California, San Diego, will bring together the world's leading researchers to address these questions in a special session of the 20th Annual CUNY Conference on Human Sentence Processing in La Jolla, California, in March 2007. The session will present and discuss the latest research comparing signed and spoken language acquisition, comprehension, and production.

Two general differences between signed and spoken language promise unprecedented insights into these questions. First, learners of signed and spoken language can have very different early language experiences, both because deafness is sometimes diagnosed only later in childhood and because spoken language is far more prevalent in everyday environments; comparing signed and spoken language abilities in light of these differences promises insights into which language mechanisms are innate and which are learned. Second, signed and spoken languages are created and perceived in starkly different ways: primarily with the hands and eyes for signed language, and primarily with the mouth and ears for spoken language.
While the 'deep' features of language, those organized by patterns of thought and sentence structure, should be similar across signed and spoken language, the features organized by how language is perceived or articulated should differ between the two.
The Annual CUNY Conference on Human Sentence Processing is the world's most prominent meeting of researchers in high-level language processing. Through NSF support, the conference will include complete sign-language interpretation, opening the meeting to Deaf participants who would otherwise be unable to take full part in conference activities. Finally, the CUNY Conference and the special session specifically target the participation of early-career investigators, including junior faculty, postdoctoral researchers, and graduate students, and so will help new generations of researchers, whether hearing, hard-of-hearing, or Deaf, enter the language sciences.