The research objective is to investigate methods for recovering geometry and performing spatial reasoning in indoor rooms. This project aims to recover the room space, illumination, and object layout from a single image. Together, these elements capture the layout of the room walls, the locations of objects in both the image and 3D space, and a lighting representation that allows illumination artifacts to be explained and rooms to be relit with inserted objects. The work takes an integrated approach, exploiting constraints within and between spatial representations. The project also aims to leverage knowledge of room geometry to reason more effectively about surface utility, enabling richer spatial analysis of indoor scenes.
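As a purely illustrative sketch of how such a joint spatial representation might be organized, the Python code below defines hypothetical containers for a room box, object cuboids, and simple point lights, plus one example of a cross-representation constraint (objects resting on the floor plane). All class names, fields, and units are assumptions made for exposition; they are not the project's actual data structures or methods.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class RoomBox:
    """Coarse room layout: floor, ceiling, and wall planes, each stored as (unit normal, offset) in n.x + d = 0 form."""
    wall_planes: List[Tuple[Vec3, float]]
    floor_plane: Tuple[Vec3, float]
    ceiling_plane: Tuple[Vec3, float]


@dataclass
class ObjectCuboid:
    """An object localized in both the image and the 3D room frame."""
    label: str
    image_bbox: Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max in pixels
    center_3d: Vec3                                # position in room coordinates (meters)
    size_3d: Vec3                                  # width, height, depth (meters)
    yaw: float                                     # rotation about the vertical axis (radians)


@dataclass
class LightSource:
    """A simple point-light term standing in for a richer illumination model."""
    position: Vec3
    intensity_rgb: Vec3


@dataclass
class SceneModel:
    """Joint spatial representation recovered from a single image."""
    room: RoomBox
    objects: List[ObjectCuboid] = field(default_factory=list)
    lights: List[LightSource] = field(default_factory=list)

    def objects_on_floor(self, tol: float = 0.05) -> List[ObjectCuboid]:
        """Return objects whose base rests on the floor plane, one example of a
        constraint tying the object layout to the room geometry (assumes y-up)."""
        n, d = self.room.floor_plane
        supported = []
        for obj in self.objects:
            base_y = obj.center_3d[1] - obj.size_3d[1] / 2.0
            # Signed distance from the cuboid base point to the floor plane.
            dist = n[0] * obj.center_3d[0] + n[1] * base_y + n[2] * obj.center_3d[2] + d
            if abs(dist) < tol:
                supported.append(obj)
        return supported
```

In this sketch, the floor-support check is the kind of "constraint between spatial representations" the text refers to: the same quantity (an object's vertical position) must be consistent with both the object layout and the recovered room geometry.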
This research unifies ideas from geometry, multi-view computer vision, shading, and statistics to recover complex spatial representations from single views. The work further aims to create tools for object insertion, object removal, and scene completion, allowing the average person to more easily create the photograph she wants, or an interior designer to quickly sketch a photorealistic prototype of a new concept. The recovered spatial information also enables mobile robots to find walkable paths through cluttered rooms and to understand how objects can be physically manipulated and placed, which is essential for assistive household robotics. Other anticipated applications include surveillance, security, and transportation safety. The project contributes to education through student projects, course development, and workshops and tutorials that reach a broader audience.