This project investigates a coordinated adaptation and network streaming framework for next-generation multi-lens stereoscopic video. Future stereoscopic cameras may comprise more than two lenses, providing a denser sampling of viewpoints that ameliorates the mismatch between the capture configuration and the viewing scenario, a problem that will grow as stereoscopic devices and displays become more ubiquitous. With an appropriate streaming system, a much richer user experience can be delivered, one that is network adaptive and reduces negative side effects of stereoscopic imaging such as 3D eye fatigue. A multi-lens stereoscopic video framework requires the confluence of three interrelated components: compression and representation; retrieval, streaming, and adaptation; and viewing-scenario-dependent optimization. Together, these components will provide a low-latency, viewing-scenario-optimized stereoscopic experience.

With this architecture in mind, this project will:

* Investigate multi-lens compression technologies that support efficient retrieval and viewing-scenario optimization. The compression of the multi-lens stereoscopic data must achieve high compression efficiency while remaining amenable to adaptation and subset retrieval.
* Develop adaptive streaming techniques for the proposed retrieval-friendly compression format. The key objective is to prevent artifacts introduced by quality reduction from making their way into the stereoscopic field. This adaptation layer will be coordinated with the viewing-scenario-dependent optimization to retrieve the best data possible while remaining network adaptive.
* Develop viewing-scenario-dependent optimal display management technologies. To maximize the viewing experience and minimize negative side effects, viewing-scenario-dependent optimization will select the subset of images from the lens array that best matches the viewing scenario. Novel-view synthesis techniques may be used to reduce the amount of image data retrieved while still providing a good user experience.
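To make the interplay between the second and third components concrete, the following is a minimal illustrative sketch (not the project's actual algorithm; the lens spacing, bitrates, and the `select_view_pair` helper are all hypothetical): given the positions of the lenses in the array, the display management layer could choose the pair of views whose baseline best matches the viewing scenario, while the adaptation layer vetoes the choice if the two streams exceed the available bandwidth.

```python
# Illustrative sketch: viewing-scenario-dependent view selection under a
# bandwidth budget. All names and numbers here are hypothetical.
from itertools import combinations


def select_view_pair(lens_positions_cm, target_baseline_cm,
                     bitrate_per_view_kbps, budget_kbps):
    """Pick the pair of lens views whose separation is closest to the
    baseline implied by the viewing scenario, if two full views fit in
    the network budget; otherwise return None (a single view plus
    novel-view synthesis would be the fallback)."""
    if 2 * bitrate_per_view_kbps > budget_kbps:
        return None
    return min(
        combinations(range(len(lens_positions_cm)), 2),
        key=lambda p: abs(
            (lens_positions_cm[p[1]] - lens_positions_cm[p[0]])
            - target_baseline_cm))


# Example: an 8-lens array spaced 1 cm apart, a viewing scenario that
# calls for a 6.5 cm effective baseline, and 800 kbps per view against
# a 2000 kbps budget.
views = select_view_pair([float(i) for i in range(8)], 6.5, 800, 2000)
```

In this toy setup the closest achievable baseline is 6 cm (views 0 and 6); when the budget cannot carry two views, the function signals that synthesis from a single retrieved view is needed instead.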

Broader Impact: Dynamically delivering stereoscopic content optimized for the viewing scenario can reduce side effects, such as 3D fatigue and headaches, that viewers experience with stereoscopic content. The proposed architecture can also serve other multi-view systems (e.g., video sensor networks) in which display requires only a subset of the camera views to be delivered. Finally, the results of the display management work can be combined with eye tracking to enable free-viewpoint and glasses-free stereo viewing experiences.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1218589
Program Officer: Darleen Fisher
Budget Start: 2012-10-01
Budget End: 2016-09-30
Fiscal Year: 2012
Total Cost: $400,000
Name: Portland State University
City: Portland
State: OR
Country: United States
Zip Code: 97207