Video content, such as television, movies and internet video, is a major source of information and entertainment. People with impaired vision often have difficulty using these media. Central vision impairment, from conditions such as age-related macular degeneration and diabetic macular edema, necessitates use of eccentric vision. Eccentric vision is impaired by a loss of spatial resolution, an elevation of crowding, and difficulties making and maintaining eye movements. Currently, rehabilitation for television viewing is very limited, and most people with central vision impairment express dissatisfaction with their viewing experience. Magnification is the most effective form of rehabilitation for central vision impairment; however, simple electronic magnification restricts the field of view. This can cause a loss of information and context, and diminishes the viewing experience.

We will develop and evaluate three novel methods of modifying electronic moving images (video) to assist people with central vision impairment. All three methods are based on magnification, but also employ dynamic and intelligent methods of reducing the loss of context, and are combined with techniques to assist eye movement control. By intelligent, we mean that the magnification method uses information about the moving image to vary the way the image is magnified. That information may come from the viewer, the viewer's eye movements, the eye movements of other viewers, or inherent features of the moving image itself. Image-processing techniques will be employed to extract such information from the moving images and to modify the moving images that are displayed.

Hemianopia, a complete loss of vision on one side in both eyes caused by a brain injury, also causes difficulties viewing video content. A major role of peripheral vision is to alert us to the presence of objects of interest; however, if those objects fall on the blind side, they are not seen.
People with hemianopia express dissatisfaction with their viewing experience, their eye movements are often biased toward one side of the screen, and they acquire less information from videos. To help people with hemianopia, we will develop and evaluate a novel assistive method in which the viewer is guided to objects of interest. This method will provide awareness of objects that would otherwise have been seen by peripheral vision. This project is a collaboration between Russell Woods, Peter Bex, Eli Peli and Daniel Saunders at the Schepens Eye Research Institute, Massachusetts Eye and Ear.
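The core gaze-contingent magnification idea described above can be illustrated with a minimal sketch: a region of each video frame centered on the viewer's current gaze point is enlarged back to full frame size, so that magnification follows the eye rather than fixing a static zoom. This is only an illustration under our own assumptions (the function name, the simple clamped crop, and nearest-neighbor resampling are our choices, not the project's actual methods, which add dynamic context preservation and eye movement assistance):

```python
import numpy as np

def gaze_contingent_magnify(frame, gaze_xy, mag=2.0):
    """Magnify one video frame around the viewer's gaze point.

    frame:   H x W x 3 uint8 array (a single video frame)
    gaze_xy: (x, y) gaze position in frame coordinates
    mag:     magnification factor (> 1)

    Returns a same-size frame showing the region around the gaze
    point enlarged by `mag` (nearest-neighbor resampling).
    """
    h, w = frame.shape[:2]
    crop_h, crop_w = int(h / mag), int(w / mag)
    gx, gy = gaze_xy
    # Clamp the crop window so it stays entirely inside the frame.
    x0 = min(max(int(gx) - crop_w // 2, 0), w - crop_w)
    y0 = min(max(int(gy) - crop_h // 2, 0), h - crop_h)
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    # Nearest-neighbor upsample of the crop back to full frame size.
    rows = np.arange(h) * crop_h // h
    cols = np.arange(w) * crop_w // w
    return crop[rows][:, cols]
```

In a real system this would run per frame against a live eye tracker, where display latency matters (the Saunders & Woods 2014 paper below measures exactly that latency for gaze-contingent displays).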
Video content, such as television and movies, is a major source of entertainment and information. As people with vision impairments often have difficulty viewing video content, we will develop and evaluate three novel assistive visual aids for people with impaired central vision, and one novel visual aid for people with hemianopia (loss of vision on one side in both eyes).
Saunders, Daniel R; Woods, Russell L (2014) Direct measurement of the system latency of gaze-contingent displays. Behav Res Methods 46:439-47
Saunders, Daniel R; Bex, Peter J; Rose, Dylan J et al. (2014) Measuring information acquisition from sensory input using automated scoring of natural-language descriptions. PLoS One 9:e93251
Han, Peng; Saunders, Daniel R; Woods, Russell L et al. (2013) Trajectory prediction of saccadic eye movements using a compressed exponential model. J Vis 13:
Saunders, Daniel R; Bex, Peter J; Woods, Russell L (2013) Crowdsourcing a normative natural language dataset: a comparison of Amazon Mechanical Turk and in-lab data collection. J Med Internet Res 15:e100
To, Long; Woods, Russell L; Goldstein, Robert B et al. (2013) Psychophysical contrast calibration. Vision Res 90:15-24
Satgunam, Prem Nandhini; Woods, Russell L; Bronstad, P Matthew et al. (2013) Factors affecting enhanced video quality preferences. IEEE Trans Image Process 22:5146-57
Woods, Russell L; Satgunam, Premnandhini (2011) Television, computer and portable display device use by people with central vision impairment. Ophthalmic Physiol Opt 31:258-74