3D Scene Digitization: A Novel Invariant Approach for Large-Scale Environment Capture
Daniel G. Aliaga, Mireille Boutin, Carl Cowen
Purdue University

The simulation of large real-world environments is a core challenge of computing technology today. Applications are numerous and diverse. For example, it would enable students to pay virtual visits to famous historical sites such as museums, temples, battlefields, and distant cities; civil engineers to capture buildings and compare them to the original design or to simulations (e.g., comparing as-built and simulated models before and after a catastrophe); archeologists to virtually preserve complex excavation sites, such as trenches, as they evolve over time; soldiers and firefighters to train in simulated environments; real estate agents to show buyers the interiors of homes; and people all over the world to enjoy virtual travel or multi-player 3D games.

Despite tremendous increases in computational power and storage space, current acquisition methods perform quite poorly. Even for small scenes, they usually fail to adequately capture many details. Manually created models, although popular, are extremely time-consuming to build, and the rendered images are poor representations of reality. Alternatively, image-based modeling and rendering produces photorealistic images, but only for small and/or diffuse environments seen from a limited range of viewpoints (e.g., QuickTime VR). Similarly, approaches that focus on recreating the geometry of the scene, such as the reconstruction methods developed in computer vision or laser-scanning approaches, struggle with complex occlusions, specular surfaces, and large data sets.

The research objective of this proposal is thus to develop the algorithms needed to capture and manipulate visually rich computer models of large and complex real-world scenes. The proposal attacks this problem with a new hybrid method that combines the geometric and photometric information contained in the scene. More precisely, the approach captures a 3D environment by densely sampling the space of viewpoints and uses this redundant data set to extract accurate models of the surface geometry and reflectance properties of the scene. This is in contrast with most current approaches, where one acquires a sparse set of data and interpolates the missing information. The work replaces interpolation with the easier tasks of semi-automatic platform navigation, data filtering, and working-set management. The key is the development of highly effective mathematical data-processing techniques.

The main research contribution of the proposed approach is the merging of expertise from the mathematical and computer sciences to solve a difficult problem in computing technology. In particular, the research makes use of a novel geometry reconstruction method based on Lie group theory, recently developed by one of the co-PIs. This method uses a set of invariants of a group action to eliminate a number of superfluous unknowns normally included in the 3D reconstruction problem. These superfluous unknowns are exactly the ones that make the reconstruction equations nonlinear. By removing them, the method arrives at a simple set of sparse linear equations involving a minimum number of unknowns, which can be solved sequentially. This allows the project to quickly and robustly extract the geometric (and photometric) information from large data sets and reconstruct large 3D environments.
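To illustrate the computational benefit of this elimination, consider a minimal sketch of the final solving step. The sketch below is not the invariant-based formulation itself; it only assumes that the elimination of the nonlinear (pose-related) unknowns has already produced an overdetermined sparse linear system A x = b over the remaining structure unknowns, and shows how such a system can be solved with a standard sparse least-squares routine. The variable names and toy constraint data are hypothetical.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical example: once the unknowns that make the reconstruction
# equations nonlinear have been eliminated, each remaining constraint is a
# linear equation touching only a few structure unknowns, so the
# coefficient matrix A is sparse.

n_unknowns = 6  # e.g., a handful of 3D point coordinates (toy size)
rows, cols, vals, rhs = [], [], [], []

# Toy constraints of the form sum_j A[i, j] * x[j] = b[i]; in the real
# system these would come from the invariant-based reconstruction equations.
toy_constraints = [
    ({0: 1.0, 1: -2.0}, 0.5),
    ({1: 1.0, 2: 1.0},  3.0),
    ({2: 2.0, 3: -1.0}, 1.0),
    ({3: 1.0, 4: 0.5},  2.0),
    ({4: 1.0, 5: -1.0}, 0.0),
    ({0: 1.0, 5: 1.0},  4.0),
    ({1: 0.5, 4: 1.0},  2.5),
]

for i, (coeffs, b_i) in enumerate(toy_constraints):
    for j, a_ij in coeffs.items():
        rows.append(i)
        cols.append(j)
        vals.append(a_ij)
    rhs.append(b_i)

A = coo_matrix((vals, (rows, cols)), shape=(len(toy_constraints), n_unknowns))
b = np.array(rhs)

# Sparse least-squares solve; because the system is linear and sparse,
# it scales far better than nonlinear optimization as the number of
# captured views and scene points grows.
x = lsqr(A, b)[0]
print("recovered structure unknowns:", x)
```

In the actual proposed pipeline, the constraints would be generated from the dense set of captured views, and the sequential structure of the equations described above would allow them to be solved incrementally rather than all at once.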
The proposed research will have impact beyond the immediate reconstruction results. Never before have researchers had access to such large and dense samplings of environments. Aside from publications and making all software available, the research project will create a public repository to store the captured models for subsequent study (e.g., models of historically significant locations). The impact of the proposed work is not an incrementally better method for capturing environments, but a bold new approach that can significantly change how people think about computer simulation of large environments.