Computers are getting faster, but users' ability to exchange information with them remains a major bottleneck. Applications requiring three-dimensional spatial input, including CAD/CAM, medical visualization, weather visualization, and simulation and training, need better ways for humans to interact with them. This research will develop "beyond the keyboard and mouse" interfaces for spatial tasks by: 1) using both hands in combination with voice input, 2) using one hand relative to the other in free space, and 3) using hand-held "prop" input devices in 3D space. This work will use helmet-based virtual reality (VR) systems as a research workbench. Although it is unlikely that VR hardware will be widely used, this approach forces designers to think "beyond the desktop." New interface techniques developed with this approach will then be adapted to a more traditional desktop setting, continuing to use physical props along with head tracking and/or stereo glasses. While this work will develop a number of specific interaction techniques, the real goal of the research is to develop a set of general design principles for spatial tasks, much as there is now a well-understood set of design rules for 2D "desktop" interface design.