This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).

Building information models (BIMs), which represent the three-dimensional (3D) geometry and high-level semantics of a facility, are increasingly used in the Architecture, Engineering, Construction, and Facility Management (AEC/FM) industry. Most BIM work focuses on representing the as-designed conditions of a facility, but the actual as-built or as-used conditions can differ significantly from the design due to changes during construction or renovations. Currently, the utilization of as-built BIMs is limited because they are difficult and time-consuming to create and because existing BIM standards do not fully support the representation of as-built conditions. This research will address these barriers by developing algorithms to automate the creation of as-built models from point cloud data collected with laser scanners and by developing new representations that support the needs of BIM stakeholders.

The modeling objective focuses on three aspects of the points-to-BIM transformation process: geometric modeling, in which raw points are segmented into geometric components, such as planar regions, and modeled parametrically (e.g., plane parameters and boundaries); semantic labeling, in which modeled components are assigned meaningful labels, such as "wall" or "ceiling"; and occlusion inference, in which surfaces that were not observed are estimated from the geometry of visible surfaces. The representation objective focuses on two aspects of representing as-built BIMs. Levels of detail addresses the difficulty of handling the large 3D point sets inherent in as-built models; representations will be formalized that support multiple levels of detail, enabling efficient high-level analysis while still supporting detailed analysis down to the level of raw data points. Metadata representation targets descriptions of how information is derived from raw data and how raw data is collected; approaches will be formalized to represent secondary data such as deviations from idealized models, missing data due to occlusion, and sensor configuration and placement. Taken together, these objectives comprise an end-to-end approach to streamlining the points-to-BIM conversion process and have the potential to transform the way BIM and 3D imaging technologies are used. These approaches will be evaluated using laser scan data from different types of scanners, drawn from case studies generated by our group and by our collaborators.
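
As a minimal illustration of the geometric modeling step, the sketch below segments the dominant planar region from a point cloud and recovers its plane parameters using RANSAC with a least-squares refinement. This is a standard technique shown only for orientation; the function names, iteration count, and inlier threshold are illustrative assumptions, not the project's proposed algorithms.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: unit normal n and offset d with n . x + d ~= 0."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the centered points is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def ransac_plane(points, n_iters=500, threshold=0.02, rng=None):
    """Find the dominant planar region in a point cloud (threshold in the units of the scan)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a plane from three random points, then count points within the threshold.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal, d = fit_plane(sample)
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the plane parameters on all inliers of the best hypothesis.
    normal, d = fit_plane(points[best_inliers])
    return normal, d, best_inliers
```

In practice such a step would be applied repeatedly (removing inliers each time) to extract multiple planar components, whose boundaries and labels are then determined by the semantic labeling and occlusion inference stages described above.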

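The levels-of-detail objective can likewise be pictured with a simple voxel-grid pyramid, in which points falling in the same voxel are summarized by their centroid while indices back to the raw points are retained. This is only one possible realization of a multi-resolution representation, included under stated assumptions; the function and parameter names are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Summarize a point cloud at a coarser level of detail.

    Points in the same cubic voxel are replaced by their centroid; the returned
    index map lets callers drill back down from each centroid to the raw points.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    centroids = np.zeros((len(counts), 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids, inverse

# A pyramid of progressively coarser levels: high-level analysis can use the
# coarse levels, while the raw points remain available for detailed analysis.
# levels = [voxel_downsample(points, s)[0] for s in (0.5, 0.1, 0.02)]
```
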
This research is expected to transform the way that BIMs are created and utilized. The algorithms and representation strategies developed under this research are intended to drastically simplify the process of creating as-built BIMs and to create new opportunities for analyzing and utilizing BIMs during construction and facility management. The reverse-engineering aspects of this research will also advance the broader area of 3D scene interpretation, with impact in diverse domains, including robotics (e.g., creating building models for indoor mobile robots), building safety (e.g., automatic mapping of buildings for first responders), and construction site monitoring. This research will be incorporated into existing Carnegie Mellon courses as well as a new project course on as-built BIMs, to be co-taught by the PIs. The visual nature of the project lends itself to inclusion in the K-12 and minority outreach programs in which the team participates. We plan to make the products of this research, including data sets and software, available via the Internet, which will be especially beneficial since 3D data sets of this kind are not generally available and are difficult and costly to create.

Project Start:
Project End:
Budget Start: 2009-08-01
Budget End: 2013-07-31
Support Year:
Fiscal Year: 2008
Total Cost: $439,990
Indirect Cost:
Name: Carnegie-Mellon University
Department:
Type:
DUNS #:
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213