A central problem of advanced computational science is making large data sets and the results of large simulations understandable to scientists and engineers. The best-known means of imparting this understanding is visualization, which allows users to see the data. However, the complexity and nature of some data make it appropriate to represent them through other senses as well; shape, for example, can be determined by touch. This project studies multisensory virtual environments for large-scale, complex scientific computing problems. To focus the research effort and validate its results, the project uses multimodal interaction with complex biomolecular datasets as the driving application. It aims to devise techniques and tools for meaningful interaction with very large biomolecular data sets in virtual environments with visual, aural, and haptic (force) feedback. Most biomolecular datasets consist of a few primitives (such as atom types, amino acid side chains, and nucleic acid bases) that combine to yield rich mosaics of biological complexity. The researchers believe this domain is rich enough to be representative of other scientific visualization sub-disciplines, yet specialized enough to offer a well-defined research target.

The research includes multimodal feedback using perception-dependent multiresolution rendering on sequential and parallel architectures. The proposed advances optimize the multimodal rendering process by exploiting three characteristics common to most biomolecular datasets: limited motif complexity, bounded accuracy, and variable levels of perceptible detail. Most scientific visualization datasets share at least two of these characteristics, so advances here should further cutting-edge research in other areas of virtual-reality-assisted scientific computing. The researchers will use hierarchical pre-rendered representations of common motifs to achieve realism at interactive speeds in multimodal contexts. They will also exploit processor support for variable-resolution integer arithmetic, together with the limited dynamic range and bounded input accuracy of biomolecular data, to transparently speed up multimodal rendering by trading computational accuracy for speed. Finally, they will exploit the facts that each sense has a threshold of least perceptible stimulus, and that people typically perceive fine detail in only a limited region, to develop variable level-of-detail rendering in an integrated, coherent framework across multiple senses. The researchers will validate the efficacy of the proposed approaches on real-life datasets through collaboration with two on-campus biochemistry researchers working on protein folding and rational drug design, as well as with visiting scientists and scholars.
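
To make the accuracy-for-speed idea concrete, the following is a minimal sketch, in C++, of how bounded input accuracy might be exploited. Crystallographic coordinates span at most a few hundred Angstroms and are typically reported to about 0.001 Angstrom, so positions fit comfortably in 32-bit fixed-point values, and proximity tests can then run entirely in integer arithmetic. The scale factor, types, and function names below are illustrative assumptions, not part of the proposal.

    #include <cstdint>
    #include <cstdio>

    constexpr int32_t kScale = 1000;  // 0.001 Angstrom resolution (assumed)

    // Convert an Angstrom coordinate to 32-bit fixed point, with rounding.
    int32_t toFixed(double angstroms) {
        return static_cast<int32_t>(angstroms * kScale +
                                    (angstroms >= 0 ? 0.5 : -0.5));
    }

    // Squared distance computed entirely in integer arithmetic;
    // 64-bit intermediates avoid overflow.
    int64_t distSq(const int32_t a[3], const int32_t b[3]) {
        int64_t dx = int64_t(a[0]) - b[0];
        int64_t dy = int64_t(a[1]) - b[1];
        int64_t dz = int64_t(a[2]) - b[2];
        return dx * dx + dy * dy + dz * dz;
    }

    int main() {
        // Two atoms 1.54 Angstroms apart along x (a typical C-C bond length).
        int32_t a[3] = {toFixed(0.0),  toFixed(0.0), toFixed(0.0)};
        int32_t b[3] = {toFixed(1.54), toFixed(0.0), toFixed(0.0)};
        // Compare against a squared 2 Angstrom cutoff; no square root needed.
        int64_t cutoff = int64_t(toFixed(2.0)) * toFixed(2.0);
        std::printf("within cutoff: %s\n", distSq(a, b) < cutoff ? "yes" : "no");
        return 0;
    }

Similarly, a hedged sketch of perception-dependent level-of-detail selection: the angular offset of a primitive from the viewer's gaze direction can index into a hierarchy of pre-rendered motif representations, reserving full detail for the foveal region. The thresholds and names here are again hypothetical placeholders, not values from the proposal.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    double norm(const Vec3& v) { return std::sqrt(dot(v, v)); }

    // Angular offset (radians) of a primitive from the gaze direction.
    double eccentricity(const Vec3& gaze, const Vec3& toPrim) {
        double c = dot(gaze, toPrim) / (norm(gaze) * norm(toPrim));
        c = std::fmax(-1.0, std::fmin(1.0, c));  // guard acos domain
        return std::acos(c);
    }

    // Map eccentricity to an index into the hierarchy of pre-rendered motif
    // representations: 0 = finest (foveal), larger = coarser (peripheral).
    int chooseLevel(double ecc) {
        if (ecc < 0.035) return 0;   // ~2 degrees: full detail
        if (ecc < 0.17)  return 1;   // ~10 degrees: reduced detail
        return 2;                    // periphery: coarse representation
    }

    int main() {
        Vec3 gaze{0, 0, -1};
        Vec3 prims[] = {{0.01, 0, -1}, {0.3, 0.1, -1}, {1.0, 0.5, -1}};
        for (const Vec3& p : prims) {
            double e = eccentricity(gaze, p);
            std::printf("eccentricity %.3f rad -> level %d\n", e, chooseLevel(e));
        }
        return 0;
    }

The same thresholding structure extends naturally to the other senses, since aural and haptic feedback also have least-perceptible-stimulus floors below which rendering effort can be skipped.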

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Application #: 9812572
Program Officer: Xiaodong Zhang
Project Start:
Project End:
Budget Start: 1999-01-01
Budget End: 2002-02-28
Support Year:
Fiscal Year: 1998
Total Cost: $262,609
Indirect Cost:
Name: State University of New York at Stony Brook
Department:
Type:
DUNS #:
City: Stony Brook
State: NY
Country: United States
Zip Code: 11794