In this project, the PI will lead an interdisciplinary team in a quest for fundamental insights into how to effectively harness the full potential of virtual reality technology for visualization and design, with a specific focus on architectural applications. Within this context, the PI will seek to enable accurate and intuitive spatial understanding of large, immersive 3D virtual environments presented both via head-mounted displays and on large projection screens. For environments presented via a head-mounted display, the PI will explore how the accuracy of the user's spatial perception and sense of presence are affected by factors such as: providing a faithful representation of the user's body; providing low-latency visual and haptic feedback about the sizes of, and distances to, tracked objects that coexist in both the real and virtual environments; and providing spatialized 3D ambient sound cues. For information presented via a stereoscopic large-screen rear-projection system, the PI will explore how the viewer's ability to attain a maximally accurate, intuitive spatial understanding of an interior or exterior is affected by factors such as: the importance of generating the image of a scene from a viewpoint that is as close as possible to the viewer's own eye position, both laterally and in height above the ground; the conditions under which the viewer is likely to interpret size and distance relationships in a projected virtual environment according to the assumptions that underlie pictures, as opposed to those that govern directly viewed scenes; and whether, for large-screen display, bigger is always better, or whether the greatest benefit comes from displaying a scene at "life size," with negative consequences possibly arising when things are displayed too large. The PI will further explore the design and evaluation of improved metaphors for intuitive locomotion through very large-scale immersive virtual environments, as well as the effective use of abstraction to represent uncertain or ambiguous information in such environments.
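One established way to realize the viewpoint matching mentioned above on a fixed projection screen is to rebuild an off-axis (asymmetric-frustum) projection each frame from the tracked eye position and the physical screen corners, following the generalized perspective projection formulation described by Kooima. The sketch below is illustrative only, not the project's implementation; the function name, the NumPy dependency, and the coordinate conventions are assumptions.

import numpy as np

def offaxis_projection(pa, pb, pc, pe, near, far):
    """Projection matrix for a tracked viewer and a fixed physical screen.

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left), given
    in the same tracker coordinates as the eye position pe.
    """
    pa, pb, pc, pe = map(np.asarray, (pa, pb, pc, pe))
    # Orthonormal screen basis: right, up, and normal.
    vr = pb - pa
    vr = vr / np.linalg.norm(vr)
    vu = pc - pa
    vu = vu / np.linalg.norm(vu)
    vn = np.cross(vr, vu)
    vn = vn / np.linalg.norm(vn)
    # Vectors from the eye to the screen corners.
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)  # perpendicular distance from the eye to the screen
    # Frustum extents at the near plane (asymmetric when the eye is off-center).
    left = np.dot(vr, va) * near / d
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d
    # Standard OpenGL-style off-axis frustum.
    P = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    # Rotate into the screen's basis, then translate the eye to the origin.
    M = np.eye(4)
    M[:3, :3] = np.vstack([vr, vu, vn])
    T = np.eye(4)
    T[:3, 3] = -pe
    return P @ M @ T

Recomputing this matrix every frame from the tracked head position keeps the rendered perspective consistent with the viewer's actual eye point, which is precisely the viewpoint-matching condition the research proposes to study.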

Broader Impacts: This work will lead to basic observations and rules of thumb, derived from careful human-subjects experiments, that will inform a broad range of research efforts involving the use of virtual environments in such domains as scientific and information visualization, situational awareness, and diversity training. It will also improve architectural education through the effective use of virtual environment technology to teach fundamental concepts in visual imagination and to integrate an egocentric perspective into the earliest stages of the design process.

Project Report

Immersive virtual environment technology has tremendous potential to enable fundamental and transformative advances in a wide variety of areas – from education and training, to psychotherapy and rehabilitation, to architectural design and scientific data visualization – by allowing people to experience a computer-mediated virtual representation of reality and to interact within it as if it were actually real. In this project, an associate professor of computer science and an associate professor of architecture teamed up to carry out an extensive program of basic and applied research aimed at one of the most fundamental challenges in VR today: how to better enable people to cognitively and perceptually interpret what they see in an immersive virtual environment (e.g., while wearing a head-mounted display) as if they were actually there, seeing it in reality.

The first key focus of our efforts was on developing strategies for enabling accurate spatial perception in immersive VR, so that, for example, architects and their clients could make reliable design decisions about the spatial layouts of prospective building models, including details such as room sizes and ceiling heights, based on their first-person experience of a digital model of the designed structure. A longstanding impediment to this capability has been the observation that people tend to perceive 3D space as compressed in virtual reality, systematically judging all points in space to be closer than they really are. One of the most significant findings from our research was the discovery that this problem can be ameliorated by allowing people to experience the virtual environment while embodied in a fully tracked, fairly realistic, first-person self-avatar. The basis for this finding came from research in embodied cognition, which suggests that our perception of the world is inextricably bound up with our affordances for action and interaction with our environment. By extending the capabilities of virtual reality technology to better support the illusion that one is actually physically present in the virtual world, rather than merely viewing it as a disembodied observer, we enabled people to make judgments of 3D distances in virtual building models that were closer to the judgments they made of the same spaces in reality.

Our research also addressed the problem of how best to enable people to achieve an accurate, intuitive understanding of the 3D spatial layout of a virtual environment that is too large to fit within the physical space available to the person wearing the VR equipment. Previous research has found that people become disoriented more easily when exploring a virtual environment while seated and controlling their viewpoint with a joystick than when actually walking around. In this project, we developed a controller for a motorized wheelchair that people could drive around a moderately sized room, and we determined perceptual limits on the extent to which we could surreptitiously steer people away from the walls while sustaining the illusion that they were freely exploring a large remote site. Such a capability has useful potential applications ranging from site planning to archaeological data understanding.
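To make the steering idea concrete, here is a minimal sketch of the curvature-gain technique from the redirected-walking literature: a small virtual rotation is injected as the user travels forward, and in compensating for it the user physically curves toward the center of the room while believing they are moving straight. This illustrates the general technique, not the project's actual wheelchair controller; the function name is hypothetical, and the 22 m curvature-radius threshold is an assumption loosely based on detection thresholds commonly reported in that literature.

import math

# Assumed threshold: a real path curving with radius >= ~22 m is commonly
# reported as imperceptible while the user believes they are moving straight.
MIN_CURVATURE_RADIUS_M = 22.0

def redirection_yaw_offset(real_pos, real_heading, room_center, forward_step_m):
    """Small virtual yaw offset (radians) to inject for this frame's travel.

    The user compensates for the injected rotation by physically turning the
    opposite way, which bends the real path toward room_center while the
    virtual path stays straight.
    """
    # Signed angle from the current heading to the direction of the room center.
    to_center = math.atan2(room_center[1] - real_pos[1],
                           room_center[0] - real_pos[0])
    error = math.atan2(math.sin(to_center - real_heading),
                       math.cos(to_center - real_heading))
    # Cap the injected rotation at the assumed imperceptible curvature.
    max_offset = forward_step_m / MIN_CURVATURE_RADIUS_M
    return -math.copysign(min(abs(error), max_offset), error)

Each frame, the returned offset would be added to the virtual camera's yaw on top of the tracked heading; determining how large such injected rotations can grow before users notice them is exactly the kind of perceptual limit the project measured.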
Our work has won widespread recognition within the field of virtual reality and beyond, and it led to the publication of 15 journal and conference papers, which have already been cited a total of over 90 times. Our research provided funding and training opportunities for five graduate students and seven undergraduate students, and it led to collaborative activities with a wide range of industrial partners, from local architectural firms to large construction companies, and with academic partners in computer science, psychology, and architecture from as far away as Germany and Norway. Our lab has additionally been active in a number of outreach efforts that aim to inspire K-12 students, including girls and others from underrepresented groups, to pursue studies in STEM fields such as computer science, which are critical to developing the technology that will build a brighter future for our nation.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0713587
Program Officer: Ephraim P. Glinert
Budget Start: 2007-09-01
Budget End: 2012-08-31
Fiscal Year: 2007
Total Cost: $462,458
Name: University of Minnesota Twin Cities
City: Minneapolis
State: MN
Country: United States
Zip Code: 55455