The long-term objective of this research is to develop quantitative models of how top-down linguistic knowledge and bottom-up sensory information combine in spoken word recognition. The research proposed here would provide quantitative measures of the effects of contextual knowledge on the recognition of English monosyllables by modeling the whole-word recognition rate as a function of the phoneme recognition rate. Such models have important applications in helping hearing-impaired individuals understand speech and in developing machine-based recognition technologies. A preliminary word-recognition-in-noise study quantitatively evaluated the facilitative context effect of having few similar-sounding words (i.e., sparse phonetic neighborhoods). Its results suggest that neighborhoods defined by word beginnings are more relevant than those defined by word endings. Two experiments are proposed to extend these findings and to provide a foundation for future investigation of other context effects and of longer stimuli. Experiment One uses a variable masking technique to compare how the beginnings and endings of words contribute to recognition, for words that vary in their similarity to other words. Experiment Two is a complementary study that uses an auditory prime to test for effects that depend on the temporal distribution of phonological information. In addition to evaluating specific hypotheses based on current theories, the quantitative results from these studies should also be useful for evaluating future conceptions of spoken word recognition.
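One standard way to relate whole-word and phoneme recognition rates is a power-law model of the form p_word = p_phoneme^j, where the exponent j reflects the effective number of independent perceptual units and lexical context pulls j below the word's phoneme count. The abstract does not specify its model, so the sketch below is only an illustrative assumption of that power-law form, not the proposal's actual method; the function names and the example numbers are hypothetical.

```python
import math

def word_recognition_rate(p_phoneme: float, j: float) -> float:
    """Predicted whole-word recognition probability, assuming the
    power-law form p_word = p_phoneme ** j (an assumed model, not
    necessarily the one used in the proposed research)."""
    if not 0.0 < p_phoneme <= 1.0:
        raise ValueError("p_phoneme must be a probability in (0, 1]")
    return p_phoneme ** j

def estimate_j(p_word: float, p_phoneme: float) -> float:
    """Solve p_word = p_phoneme ** j for j from observed rates.
    Smaller j than the phoneme count indicates a facilitative
    context effect (e.g., a sparse phonetic neighborhood)."""
    return math.log(p_word) / math.log(p_phoneme)

# Illustration with made-up numbers: at a phoneme rate of 0.8, a
# 3-phoneme word recognized phoneme-by-phoneme independently predicts
# 0.8 ** 3 = 0.512; an observed word rate of 0.64 would imply j = 2.0,
# i.e., context made the word easier than independence predicts.
independent_prediction = word_recognition_rate(0.8, 3)   # 0.512
effective_j = estimate_j(0.64, 0.8)                      # 2.0
```

Under this formulation, comparing fitted j values across stimulus conditions (e.g., sparse versus dense neighborhoods, or beginning- versus ending-masked words) gives a single-number measure of how much contextual knowledge contributes to recognition.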