It can be argued that the primary adaptive function of memory is not to remember the past, but to predict the future. Knowledge of past experiences combined with an understanding of the present state gives organisms the ability to anticipate future rewards or avoid impending danger. The goal of the proposed research is to develop a mathematical theory describing how people use past history and a representation of the present to predict the future. The theory will be built on a mathematical model of how the history leading up to the present moment can be efficiently compressed into a representation that could be maintained by the brain. The investigators will pursue the development of the theory using three techniques. First, in order to test whether human learners behave in a way consistent with the hypothesis, the investigators will conduct a series of behavioral experiments using undergraduate students as research subjects. The experiments will present subjects with a series of symbols generated from a hidden sequence and, at various stages, ask them to predict the symbols that will follow. Second, the investigators will conduct computer simulations to train the equations on a large body of naturally-occurring language. Language has a rich temporal structure defined by the way words, and combinations of words, follow one another. Third, the investigators will work to extend the mathematics of the hypothesis so that it can describe a wide range of phenomena in learning and memory. The large-scale goal is to reorient several subfields of cognitive psychology---episodic memory, semantic memory, conditioning, and interval timing---around an understanding of how temporal history is represented and utilized by the human brain.
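To make the core idea concrete, the following is a minimal illustrative sketch, not the proposal's actual model: the summary does not specify the equations, so this sketch assumes one simple way a symbol history could be compressed (a bank of exponentially decaying traces, one per timescale) and read out to predict the next symbol with a delta-rule learner. All names, decay rates, and the toy hidden sequence are hypothetical.

import numpy as np

# Hypothetical sketch: compress history into decaying traces, predict the next symbol.
rng = np.random.default_rng(0)
alphabet = 4                         # number of distinct symbols (assumed)
decays = np.array([0.3, 0.6, 0.9])   # one trace per timescale (assumed)
n_features = alphabet * len(decays)

traces = np.zeros((len(decays), alphabet))    # compressed representation of history
weights = np.zeros((alphabet, n_features))    # linear readout from history to prediction

def observe(symbol):
    """Fold the newest symbol into the decaying history representation."""
    global traces
    onehot = np.eye(alphabet)[symbol]
    traces = decays[:, None] * traces + (1 - decays)[:, None] * onehot

def predict():
    """Return a probability distribution over the next symbol."""
    scores = weights @ traces.ravel()
    expd = np.exp(scores - scores.max())
    return expd / expd.sum()

def learn(next_symbol, lr=0.1):
    """Delta-rule update pushing predictions toward the observed successor."""
    global weights
    error = np.eye(alphabet)[next_symbol] - predict()
    weights += lr * np.outer(error, traces.ravel())

# Toy "hidden sequence": a repeating pattern in which the current symbol alone is
# ambiguous (2 is followed by 3 or by 1), so prediction requires the compressed history.
pattern = [0, 1, 2, 3, 2, 1]
for t in range(2000):
    current = pattern[t % len(pattern)]
    upcoming = pattern[(t + 1) % len(pattern)]
    observe(current)
    learn(upcoming)

print(np.round(predict(), 2))  # distribution over the symbol expected next

The point of the sketch is only that a fixed-size, decaying summary of the past, rather than a verbatim record, can suffice to anticipate what comes next; the proposed theory develops this kind of compressed temporal representation in mathematically rigorous form.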
If successful, the proposed research could have far-reaching practical impacts. It could provide insight into how children and adults learn, leading to better instructional tools. It would also represent a significant step toward fully automated natural language processing. The rise of electronic communication has produced vast quantities of text---more than could ever be read by even a small army of human readers. Algorithms that extract knowledge from large quantities of text currently find use in applications ranging from essay grading and other educational tools to intelligence analysis. The investigators anticipate that the equations will be much better at extracting knowledge from natural text than several widely used algorithms. Finally, there may be useful technologies that exploit the ability to predict the future from the past and the present as efficiently as humans do.