We cannot fully perceive all of the visual information our eyes can take in at once. If we could, beginning typists would not need to hunt for the C key; they could simply look at the keyboard and know that the C is the third key from the left in the bottom row. That doesn't happen. Even though the C is in plain sight, novice typists have to search from key to key until they find the one they need. Before finding the C, the typists see something at the location of the C; they just have not fully perceived it as the letter C.

This example illustrates that vision can be broadly divided into two stages. First, basic features like color, motion, and size are perceived across the entire visual field at the same time. (There are, perhaps, a dozen of these "pre-attentive" basic features.) Then, as attention is directed to a particular region of the visual field, the features in that region are bound together into representations of objects. Thus, when we first look at a scene, what we see is a collection of pre-attentive features. Only when attention is deployed to an item are those features perceptually bound together into a recognizable object.

In this project we are studying "post-attentive vision," asking what happens after attention moves away from an object. Do the features remain bound together? The traditional answer has been "yes." However, our preliminary data suggest that, to the contrary, when attention leaves an object, the visual representation reverts to its pre-attentive state of unbound features. For example, when people search repeatedly through a visual display of individual letters for a target letter (e.g., Is there an E? Is there a G?), one might expect the search to become increasingly efficient as attention is directed to more and more of the letters in the display. However, this does not happen; the search is apparently no more efficient on trial 100 than on trial 1.

Building upon these and other recent findings, we will conduct further tests of the hypothesis that people do not build up a visual percept over time, and we will also explore several issues arising from this hypothesis. The results will speak to basic questions about the nature of visual perception, and will also have practical implications for our understanding of real-world tasks involving visual search (e.g., driving, interpreting X-rays, reading).