The meaning of a word is independent of the sensory origin of the language signal, and comprehension scores for spoken and visually presented text are nearly identical in college-age readers. The current research examines one specific hypothesis, which we refer to as the universal access hypothesis, according to which a modality-independent sound code is used to retrieve a word's meaning. Our work differs from the vast majority of prior work in this area in both content and methodology. The research examines whether word meaning is retrieved via a relatively abstract phonemic code, which could be obtained by some type of grapheme-to-phoneme conversion process, or via an acoustically detailed, speech-based phonetic code. A novel method presents a spoken word to the reader while a specific visual target word is viewed. Phonemic and phonetic properties of the spoken word are manipulated, and the effects of these manipulations on eye movements during target viewing are recorded. According to our universal access hypothesis, phonetic and phonemic sound properties of the spoken word should influence visual target recognition during reading, insofar as they have been shown to influence spoken word recognition. Results from the proposed work could provide insight into the source of reading impairments, which are often accompanied by inadequate phonemic or phonetic skills.