A fundamental question facing researchers in linguistics and cognitive science is whether children's ability to learn language is due to specialized cognitive modules or to more general learning capacities. This research tests a strong version of the hypothesis that specialized cognitive modules for language learning exist.
Specifically, the goal of this research is to find evidence for or against the claim that a specialized module is present for learning how sounds pattern in languages. This claim is grounded in computational considerations, which reveal that the way languages combine speech sounds to make words (phonology) is fundamentally different from the way languages combine words to make sentences (syntax).
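To make this contrast concrete, the sketch below uses two toy patterns (chosen purely for illustration; they are not the stimuli used in the experiments described here). A typical phonotactic restriction can be checked by scanning a fixed-size window over adjacent sounds, whereas a syntax-like nested dependency requires memory that grows with the length of the input.

```python
# Illustrative sketch only: hypothetical toy patterns, not the experimental stimuli.

def obeys_local_phonotactic(word, banned=("sS", "Ss")):
    """A strictly local constraint: reject any banned adjacent pair of
    segments (e.g. a mismatched sibilant sequence *sS). A fixed two-symbol
    window suffices, regardless of word length."""
    return all(word[i:i+2] not in banned for i in range(len(word) - 1))

def obeys_nested_dependency(sentence):
    """A syntax-like pattern: every opening element must be closed in
    last-in-first-out order (center embedding). Checking this needs a
    stack whose depth grows with the input, i.e. more than a fixed window."""
    stack = []
    for w in sentence.split():
        if w == "if":        # toy "opening" word
            stack.append(w)
        elif w == "then":    # toy "closing" word
            if not stack:
                return False
            stack.pop()
    return not stack

print(obeys_local_phonotactic("sotasu"))                  # True: no *sS anywhere
print(obeys_local_phonotactic("sSota"))                   # False
print(obeys_nested_dependency("if if A then B then C"))   # True: properly nested
print(obeys_nested_dependency("if A then then B"))        # False
```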
The PIs exploit this computational understanding to develop a series of artificial language learning experiments that investigate, in two ways, how specialized the module for learning phonology is. First, the learnability of an attested sound pattern is compared with the learnability of a minimally different, but unattested, sound pattern; theoretically, the only known difference between these two patterns is their computational complexity. Second, a series of experiments examines the learnability of patterns found only in the syntax, not the phonology, of natural languages. A distinguishing feature of both series of experiments is that the specific patterns to be tested are well understood both in theoretical computer science and in theoretical linguistics.
The results of this research will lead to a better understanding of the psychological reality of the computational boundaries investigated here. This in turn will provide a deeper understanding of what constitutes a possible phonological pattern, how such patterns are learned, and first language acquisition in general. The results also directly address the extent to which human language learning is domain-specific or domain-general. Studies of language processing and clinical research into speech and language disorders may also benefit from this investigation.
The main goal of this research was to find evidence for or against the claim that humans employ a distinct learning mechanism for phonology. The learnability of logically possible phonotactic patterns was examined experimentally using the artificial language learning paradigm. The findings support the idea that there is a distinct learning mechanism for phonology (as opposed to syntax), because the phonological learning mechanism is subject to computational constraints that do not appear in syntactic or visual pattern learning. Three series of experiments were conducted.

The first series was designed to test the learnability of a pattern found in natural language syntax but not in the phonology of any human language. The pattern was realized over sentences (syntactic context) and over words (phonotactic context), and the results show that participants were able to learn the pattern only when it was presented in the syntactic context. These results support the idea that the learning mechanism for phonology is different from the one for syntax, and that the absence of this pattern in phonology is due to the computational restrictions of a phonological learner.

The second series was designed to compare the learnability of a phonologically plausible sound pattern that is not found in any natural language with that of a minimally different counterpart that is attested. The results show that the unattested phonotactic pattern was more difficult to learn than the attested one, suggesting that the phonological learning mechanism is subject to even stricter computational constraints than those observed in the first series of experiments.

The third series was carried out to investigate whether the learning restrictions revealed by the second series also apply to non-linguistic domains. The same pattern was embedded in sequences of shapes (visual context) and in sequences of drumbeats (auditory context). The results show that the computational constraints of the phonological learning mechanism were also observed in non-linguistic auditory learning, but not in visual pattern learning, which could imply that the phonological learner and the non-linguistic auditory learner share some, but not necessarily all, properties.

The specific patterns tested are well understood and well motivated from the perspectives of theoretical linguistics and theoretical computer science. The results of these experiments provide insight into the properties of the human phonological learning mechanism and advance the debate over domain-specific versus domain-general mechanisms for human learning.

Intellectual Merit. This study was among the first experimental studies to test, with computationally well-defined patterns, the hypothesis that there is a distinct, highly constrained learning module for phonology. The results shed light on what constitutes a possible phonological pattern and how such patterns can be learned. More specifically, the data obtained suggest that computational properties play a more important role in the typology of phonotactic patterns than was previously assumed.
These results inform research in a number of fields, including theoretical, computational, and experimental linguistics, especially in the design of comparative paradigms for artificial language learning.

Broader Implications. The implications of this research extend beyond the fields of phonology and linguistics. The results help us understand the limitations, capabilities, and infrastructure of human cognition in general. In particular, the non-linguistic experiments, which investigated the learnability of particular patterns in the visual and auditory domains, revealed that the subregular restrictions observed in phonotactic learning were not seen in the visual domain. The same restriction, however, was instantiated differently in the non-linguistic auditory domain.