BIM Hyperreality (2020)
article⁄BIM Hyperreality (2020)
abstract⁄Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system, using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering, to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
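The abstract describes pairing photorealistic renderings with semantic labels derived from BIM object classes to form synthetic training data. A minimal sketch of how such image/label pairs might be assembled is shown below; the class list, function names, and array shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical BIM object classes (illustrative; not the paper's label set)
BIM_CLASSES = ["background", "wall", "window", "door", "roof"]

def labels_to_onehot(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an HxW map of BIM class indices to an HxWxC one-hot mask."""
    onehot = np.zeros((*label_map.shape, num_classes), dtype=np.float32)
    for c in range(num_classes):
        onehot[..., c] = (label_map == c)
    return onehot

def make_training_pair(rendering: np.ndarray, label_map: np.ndarray):
    """Pair a photorealistic rendering (HxWx3, uint8) with its one-hot,
    BIM-derived semantic mask, as an input/target pair for an
    image-to-image model such as a conditional GAN."""
    image = rendering.astype(np.float32) / 255.0  # normalize to [0, 1]
    target = labels_to_onehot(label_map, len(BIM_CLASSES))
    return image, target
```

In a pix2pix-style setup, such pairs would supervise a generator that maps photos (or renderings) to semantic masks; the key point of the paper is that no annotated photos are needed, since both the rendering and the label map come from the same BIM model.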
Year | 2020
Authors | Alawadhi, Mohammad; Yan, Wei
Issue | ACADIA 2020: Distributed Proximities / Volume I: Technical Papers
Pages | 228-236
Library link | N/A
Entry filename | bim-hyperreality