Speaking and understanding a language (any language) involves mastery of a complex and subtle system of principles: humans routinely produce and understand sentences and expressions they have never heard before. The system underlying this ability is entirely unconscious. We cannot directly "look inside" our minds or brains to see how it works, but we can deduce its shape by observing which expressions are well formed and what they mean.
Much previous work that attempts to model such systems posits a rather complicated architecture, one that is difficult to translate into a procedure humans could actually use in real time. The aim of the research here is to investigate a much simpler picture of how the system works. The hypothesis under investigation, called "direct compositionality," posits that meanings are computed directly as smaller linguistic expressions (words and phrases) combine to form larger ones. If this hypothesis is correct, it will help explain how people process speech in actual conversational settings immediately and effortlessly, as it unfolds. Understanding the processes by which people decode messages will also ultimately help in designing a variety of systems that attempt to simulate human intelligence, including "intelligent" systems for human/machine interaction as well as machine translation systems.
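The core idea of direct compositionality can be illustrated with a toy sketch: each word is paired with a meaning (an individual or a function), and the meaning of a larger expression is computed at the moment its parts combine, with no intermediate representation. The lexicon entries, model, and helper below are purely illustrative assumptions, not part of any actual proposal in the research described here.

```python
# Toy sketch of direct compositionality: meanings combine by function
# application as soon as two expressions are put together.

def combine(a, b):
    """Combine two meanings: apply whichever one is a function to the other."""
    if callable(a):
        return a(b)
    if callable(b):
        return b(a)
    raise TypeError("neither meaning is a function")

# Hypothetical miniature model of the world.
facts = {("sleep", "kim"): True}

# Hypothetical lexicon: a name denotes an individual; an intransitive
# verb denotes a one-place predicate over individuals.
lexicon = {
    "Kim": "kim",
    "sleeps": lambda subject: facts.get(("sleep", subject), False),
}

# The meaning of "Kim sleeps" is built directly as the two words combine:
meaning = combine(lexicon["sleeps"], lexicon["Kim"])
print(meaning)  # True
```

The point of the sketch is that no syntactic tree is first built and then separately interpreted; interpretation happens in step with combination, which is what makes the hypothesis attractive as a model of real-time processing.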