In our daily lives, we search for our keys, look for a friend in a crowd, and try to find a button on our remote control. Visual search is also an important part of many applications, including search for a possible tumor in a mammogram or for a threat in a baggage x-ray. Search has been studied intensively for 30 years, but the results remain puzzling. Some searches are easy, even when the target appears against a background of many "distractor" items. Other searches become quite difficult with many distractors, even when the target and distractors are quite distinct. We lack a computational model that can predict which searches will be easy or hard, or make quantitative predictions of search performance for arbitrary displays. The overall goal of the proposed research is to better understand visual search based on the insight that the capabilities of peripheral vision provide a fundamental constraint on search performance. Peripheral vision enables fast target detection in the periphery (the target seems to "pop out") and guides eye movements until, ultimately, the observer finds the target. The proposed work builds on recent modeling of peripheral vision. These recent results suggest that peripheral vision processes not individual items, but rather sizable local "patches," which it represents in terms of a rich set of summary statistics (Balas, Nakano, & Rosenholtz, 2009). The proposed research has two intertwined aims.
Aim 1 is to develop and test models of visual search based on the hypothesis that search is constrained by the discriminability of peripheral patches containing a target (and a number of distractors) from those containing only distractors. In particular, Dr. Rosenholtz will examine the extent to which search performance can be predicted by: (1) peripheral discriminability of individual items; (2) peripheral discriminability of larger, crowded patches; (3) predicted discriminability of target-present vs. target-absent patches based upon their summary statistic representation; and (4) a quantitative model of the fixations required to find a target.
Aim 2 is to test whether a wide range of search phenomena can be accounted for by a single mechanism of peripheral vision. In particular, Dr. Rosenholtz will examine: (1) the prevalence of search asymmetries, e.g. that it is easier to search for a 'Q' among 'O's than for an 'O' among 'Q's; (2) differences between search for a target differing from distractors by a single feature, by a conjunction of features, or by a configuration of basic features; (3) somewhat puzzling accounts of what constitutes a "basic feature" that can guide search; and (4) the effects of grouping on visual search.
Visual search is a near-ubiquitous task in our daily lives, and understanding it will clarify more generally the processes by which we constantly move our eyes to piece together information about the world. In addition, understanding visual search will elucidate the representations and performance of normal human vision. Successfully modeling visual search will shed light on important search tasks such as finding a tumor in a mammogram, and will enable improved design of low-vision aids for older adults and the visually impaired.