DeepSpaceBIM – AI @ Construction Sites

“DeepSpaceBIM 4.1 – Digitaler Bauassistent der Zukunft” (“Digital Construction Assistant of the Future”) is a project funded by the Federal Ministry of Transport and Digital Infrastructure (BMVI, grant no. 19F2057E, November 2018 – May 2021). It aims to create a “digital construction assistant” in the form of an experimental development, with the overall goal of making the complexity of large-scale construction projects more manageable. To this end, it exploits modern information technologies such as Augmented Reality (AR), Building Information Modeling (BIM), mobile point cloud capture, and Artificial Intelligence (AI).

TU Darmstadt’s Centre for Cognitive Science participates through its Artificial Intelligence and Machine Learning research group (Kristian Kersting). The project consortium comprises M.O.S.S. Computer Graphic Systems, Robotic Eyes, DMT, Steinmann Kauer Consulting, Drees & Sommer, and Technical University of Darmstadt (TU Darmstadt).

The Use of 3D Point Clouds on Construction Sites. One objective of the project is the development and improvement of a mobile, handheld device by the partner DMT for capturing accurate 3D point clouds and taking precise position measurements (the “Pilot 3D” tool). TU Darmstadt investigates the use of such 3D point clouds to track specific aspects of construction site progress. To identify potential construction irregularities and flaws, we work towards intelligent algorithms able to compare the as-is construction status, given by point cloud footage, with the planned status, given for instance by a BIM model.
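The as-is versus planned comparison can be illustrated with a minimal sketch: measure how far captured points deviate from a planned surface taken from the building model. Everything here is a simplified assumption for illustration only; the planned wall is reduced to a single plane, and the function name `deviation_from_plane` and the 2 cm tolerance are our own choices, not part of the project’s actual pipeline.

```python
import numpy as np

def deviation_from_plane(points, anchor, normal, tol=0.02):
    """Signed distance of captured points from a planned plane
    (illustrative stand-in for a BIM wall surface); returns the
    distances and a mask of points deviating by more than `tol` metres."""
    n = normal / np.linalg.norm(normal)
    d = (points - anchor) @ n          # signed point-to-plane distance
    return d, np.abs(d) > tol

# Toy as-built wall: planned at x = 0, but half of it built 3 cm off.
rng = np.random.default_rng(1)
pts = np.zeros((200, 3))
pts[:, 1:] = rng.uniform(0, 3, size=(200, 2))  # spread over the wall face
pts[100:, 0] = 0.03                            # second half misplaced by 3 cm

d, flagged = deviation_from_plane(
    pts, anchor=np.zeros(3), normal=np.array([1.0, 0.0, 0.0]))
print(flagged.sum())  # the 100 misplaced points exceed the 2 cm tolerance
```

In a realistic setting the planned surfaces would come from the BIM geometry rather than a hand-written plane, but the tolerance-check idea stays the same.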

Progress Detection at Construction Sites using Point Clouds. Understanding 3D scenes is a very useful but challenging task that requires substantial time for semantic instance labeling. Most existing databases in this area of research contain limited numbers of instances and concentrate only on popular class labels. For construction sites in particular, where the objects to be recognized, such as materials, equipment, and tools, vary widely in size, shape, and model, creating an up-to-date database for 3D semantic segmentation requires extreme effort.

We propose an algorithm that aims at semantic labeling of differences between two point clouds in a scalable way [1]. For tracking progress on a construction site this approach is beneficial, since the changes typically signal where progress has been made. The difference between the point clouds provides a direct first segmentation, leaving us “only” with the task of labeling the segments of interest. The main advantage of our approach is that we do not rely on a dataset of labeled point clouds; instead, we can source the training data (consisting of images) purely based on the names of the classes and of the objects contained in them. This enables us to easily label any object of interest that is large enough to be detected when comparing two point clouds, without an expensive data collection and labeling process.
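The first step, computing the difference between two point clouds, can be sketched with a simple nearest-neighbour test: a target point with no source point nearby is a candidate “new” point. This is a generic baseline for illustration, not the project’s actual implementation; the function name `detect_new_points` and the 5 cm threshold are our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_new_points(source, target, threshold=0.05):
    """Return the points of `target` that have no neighbour in `source`
    within `threshold` metres, i.e. candidate new geometry."""
    tree = cKDTree(source)
    dists, _ = tree.query(target, k=1)  # distance to nearest source point
    return target[dists > threshold]

# Toy example: a flat "floor", plus a new "box" of points in the target scan.
rng = np.random.default_rng(0)
floor = rng.uniform(0.0, 1.0, size=(500, 3)) * np.array([1.0, 1.0, 0.0])
box = rng.uniform(0.4, 0.6, size=(50, 3)) + np.array([0.0, 0.0, 1.0])

source = floor
target = np.vstack([floor, box])

new_pts = detect_new_points(source, target, threshold=0.05)
print(len(new_pts))  # exactly the 50 box points are flagged as new
```

The flagged points would then form the segments handed to the image-based labeling stage; real scans would additionally need registration and noise handling before this comparison.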

Figure 1: Comparison of two roughly captured point clouds of a room. Left: change detection – detected new points in the target cloud compared to the source. Right: segmentation – color-coded segmentation of the point cloud difference, with a detected new object marked in yellow.

Hole Detection in Point Clouds. During the construction of a building, doors and windows are usually installed late in the building process and require accurate corresponding wall openings in terms of size, shape, and position. The later a misplacement is detected, the more expensive it becomes to fix.

In this part of the project, we aim to use advances in machine learning to detect wall openings, i.e. “holes”, in 3D point clouds, and to compare their shape, size, and position with the specification given by the digital building model in order to verify the accuracy of window openings and doorways. Although there are multiple methods for the automatic detection of holes based on persistent homology theory, the automated detection of holes using deep neural networks is still new and a promising direction for our ongoing research.
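For intuition, a purely geometric baseline (not the neural approach under investigation) can be sketched: project the wall points onto the wall plane, rasterize them into a 2D occupancy grid, and treat empty cells as hole candidates. The function name `hole_mask`, the assumption that the wall lies in the x–z plane, and the 10 cm cell size are all illustrative choices of ours.

```python
import numpy as np

def hole_mask(wall_points, cell=0.1):
    """Rasterize wall points (assumed to lie near the x-z plane) into a
    2D occupancy grid; True cells received no points and are hole candidates."""
    xs, zs = wall_points[:, 0], wall_points[:, 2]
    nx = int(np.ceil((xs.max() - xs.min()) / cell))
    nz = int(np.ceil((zs.max() - zs.min()) / cell))
    ix = np.minimum(((xs - xs.min()) / cell).astype(int), nx - 1)
    iz = np.minimum(((zs - zs.min()) / cell).astype(int), nz - 1)
    occ = np.zeros((nx, nz), dtype=bool)
    occ[ix, iz] = True
    return ~occ  # empty cells = potential openings

# Toy wall: a 4 m x 3 m grid of points with a roughly 1 m x 1 m window cut out.
xg, zg = np.meshgrid(np.arange(0, 4, 0.05), np.arange(0, 3, 0.05))
pts = np.stack([xg.ravel(), np.zeros(xg.size), zg.ravel()], axis=1)
window = ((pts[:, 0] > 1.0) & (pts[:, 0] < 2.0) &
          (pts[:, 2] > 1.0) & (pts[:, 2] < 2.0))
wall = pts[~window]

mask = hole_mask(wall, cell=0.1)
print(mask.sum())  # empty cells roughly cover the 1 m x 1 m window
```

From such a mask, the bounding box of a connected empty region gives the opening’s estimated size and position, which could then be compared against the dimensions specified in the building model.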

Contacts:

Cigdem Turan, Dirk Balfanz

References:

[1] Marten Precht (2019): Image-based semantic change detection in 3D point clouds using automatically sourced training data. Master’s thesis. Supervisors: Prof. Dr. Kristian Kersting, Dr. Cigdem Turan.