Visual learning (VL) is defined as performance enhancement resulting from training on, or exposure to, a visual feature, and is regarded as a manifestation of plasticity in visual and brain processing. Because different researchers have used different stimuli, tasks, and parameter sets in their VL studies, direct comparisons between findings have been difficult, as has the identification of any general rules of VL. Nevertheless, there have been few strong efforts to organize these divergent results and to identify possible general rules. In the present proposal, we aim to clarify general rules of VL by examining various aspects of VL within a single framework. In all the proposed experiments, we will examine how sensitivity to feature values (e.g., −45°, 0°, and 45° in orientation) at and around a trained feature value changes as a result of training (changes in the sensitivity tuning function). Specifically, we will examine the effects of different types of training (detection, discrimination, and exposure), the time course of training, and feedback to subjects (response feedback, block feedback, and incorrect feedback) on the shape of the sensitivity tuning function. To date, most studies have examined the effects of these three fundamental factors (training, time course, and feedback) and their sub-factors (e.g., detection, discrimination, and exposure within training) on VL independently, without clearly relating them to one another. In the current proposal, through systematic investigation of how these factors and sub-factors change the sensitivity tuning function, we will test whether their effects arise from changes in a common underlying mechanism, reflected in one or both of two component patterns of tuning-function change: a performance increase at and near the trained feature value (center increase) and a performance decrease over a wider range of feature values (wide-range decrease).
Visual learning is regarded as a manifestation of plasticity in visual and brain processing. The proposed research has potential for clinical application: the scientific knowledge it generates may lead to improved diagnosis of, and rehabilitative therapies for, brain disorders and lesions, in particular those affecting visual function.
|Sasaki, Yuka; Watanabe, Takeo (2016) V3A takes over a job of MT+ after training on a visual task. Proc Natl Acad Sci U S A 113:6092-3|
|Tamaki, Masako; Bang, Ji Won; Watanabe, Takeo et al. (2016) Night Watch in One Brain Hemisphere during Sleep Associated with the First-Night Effect in Humans. Curr Biol 26:1190-4|
|Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo et al. (2016) Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning. Cereb Cortex 26:3681-9|
|Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo et al. (2016) Learning to Associate Orientation with Color in Early Visual Areas by Associative Decoded fMRI Neurofeedback. Curr Biol 26:1861-6|
|Watanabe, Takeo; Sasaki, Yuka (2015) Perceptual learning: toward a comprehensive theory. Annu Rev Psychol 66:197-221|
|Berard, Aaron V; Cain, Matthew S; Watanabe, Takeo et al. (2015) Frequent video game players resist perceptual interference. PLoS One 10:e0120011|
|Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho et al. (2015) Real-Time Strategy Video Game Experience and Visual Perceptual Learning. J Neurosci 35:10485-92|
|Žaric, Gojko; Yazdanbakhsh, Arash; Nishina, Shigeaki et al. (2015) Perceived temporal asynchrony between sinusoidally modulated luminance and depth. J Vis 15:13|
|Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo (2015) Visual perceptual learning by operant conditioning training follows rules of contingency. Vis cogn 23:147-160|
|Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H et al. (2015) Reduction in the retinotopic early visual cortex with normal aging and magnitude of perceptual learning. Neurobiol Aging 36:315-22|
Showing the most recent 10 out of 40 publications