This infrastructure project will develop an open source software toolkit, called OpenMR, to support building "mixed reality" data analysis systems that project data into the physical world using a new class of display devices such as Microsoft Hololens and Oculus Rift. Through OpenMR, these lightweight, wearable, mobile devices will tap into data-intensive infrastructures hosted in the cloud, with the goal of developing systems that allow users to perform data-intensive tasks from anywhere, without requiring heavy, large-format displays driven by dedicated local computers. To pursue this research, the investigators will acquire both dedicated cloud-computing servers (to support data analysis) and mixed reality hardware devices (to create the interfaces). They will develop OpenMR to connect this hardware, to support common analysis tasks such as selecting, filtering, and classifying data, and to create data displays in the physical world. To both demonstrate the toolkit and advance data analysis research, they will build a number of prototype mixed reality interfaces for researchers whose work requires analyzing large amounts of data in domains including weather, biology, and medical imaging. In addition to advancing those specific research areas, studying these prototypes with real users will support research on the underlying data analysis techniques, the cognitive science of how people interact with data in the physical world, and the design principles needed to build mixed reality systems. This, in turn, will make these emerging technologies more likely to succeed and spread, and increase the chance of finding potential 'killer apps' for these systems. The infrastructure will also directly support education and research at the partner universities in data visualization, computer graphics, computer vision, and machine learning, while the release of the toolkit will benefit the wider community.
This research is timely and important because smart devices, in particular virtual and mixed reality devices such as Google Glass, Microsoft Hololens, Oculus Rift, and Google Cardboard, are becoming commonplace, and they will play an increasingly important role, relative to traditional laptop and desktop computers, in how people interact with digital information.

The long-term vision of the project is to develop a mixed reality research infrastructure that supports data-centric innovation anywhere, providing immersive, intuitive, location-free tools for machine learning, data analysis, reduction, summarization, and storage. This includes support for the full pipeline of data-centric work in mixed reality spaces through the OpenMR open source toolkit: front-end visualization and interaction that leverages awareness of available rendering spaces and hardware, along with effective visualization patterns in 2D and 3D spaces, to optimize interaction; key data analysis and machine learning components in the middle layers, including automatic, generic feature engineering and joint optimization of classification performance and effective identification of discriminating features; and high-performance computing and cost-sensitive job management on the server. The team will evaluate OpenMR's efficiency, stability, scalability, functionality, flexibility, and ease of adoption through a number of mechanisms, including self-evaluations and documentation of the design process, review from domain experts, and evaluation with both expert and novice users on data analysis tasks that cut across the specific application domains described above. The toolkit itself will be released on the GitHub open source platform during the third year of the project, after it has reached an initial level of maturity and usefulness. The investigators will publicize OpenMR through a YouTube channel with a set of demonstration videos; outreach to researchers interested in immersive visualization, visual analytics, multi-sensory human-computer interaction, machine learning with human-in-the-loop, and high-performance computing; and collaboration with undergraduates in the Students, Technology, Academia, Research, and Service Computing Corps consortium.
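The three-layer pipeline described above can be illustrated with a minimal sketch. Note that OpenMR has not yet been released, so none of the names below come from its actual API; every function, class, and field here is hypothetical, chosen only to show how a headset client's selection/filter query might pass through an analysis middle layer before a render-ready summary is returned to the display.

```python
# Hypothetical sketch of the three-layer OpenMR-style pipeline:
# headset client -> analysis middle layer -> render-ready payload.
# All names (Query, middle_layer, render_payload) are illustrative
# assumptions, not the real OpenMR API.
from dataclasses import dataclass


@dataclass
class Query:
    """A selection/filter request sent from the headset client."""
    field_name: str
    threshold: float


def middle_layer(records, query):
    """Filter records, then tag each survivor with a toy class label."""
    selected = [r for r in records if r[query.field_name] >= query.threshold]
    # Toy 'classification': label by magnitude so the headset could
    # color-code points in the physical-world display.
    return [
        {**r, "label": "high" if r[query.field_name] >= 2 * query.threshold else "low"}
        for r in selected
    ]


def render_payload(classified):
    """Summarize for the display layer: counts per class label."""
    summary = {}
    for r in classified:
        summary[r["label"]] = summary.get(r["label"], 0) + 1
    return summary


if __name__ == "__main__":
    data = [{"temp": 1.0}, {"temp": 3.0}, {"temp": 7.0}]
    print(render_payload(middle_layer(data, Query("temp", 2.0))))
    # -> {'low': 1, 'high': 1}
```

In a real deployment the middle layer and summarization would run on the cloud servers described in the abstract, with only the compact payload shipped to the bandwidth- and battery-constrained headset.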

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1629913
Program Officer: Balakrishnan Prabhakaran
Project Start:
Project End:
Budget Start: 2016-10-01
Budget End: 2021-09-30
Support Year:
Fiscal Year: 2016
Total Cost: $493,140
Indirect Cost:
Name: University of North Carolina at Charlotte
Department:
Type:
DUNS #:
City: Charlotte
State: NC
Country: United States
Zip Code: 28223