
Text-to-Form (2020)

article⁄Text-to-Form (2020)
contributor⁄
abstract⁄Traditionally, architects express their thoughts on the design of 3D architectural forms via perspective renderings and standardized 2D drawings. However, as architectural design is always multidimensional and intricate, it is difficult to convey the design intention, concrete form, and even spatial layout through simple language descriptions. Benefiting from the rapid development of machine learning, especially natural language processing and convolutional neural networks, this paper proposes a Linguistics-based Architectural Form Generative Model (LAFGM) that can be trained to make 3D architectural form predictions based simply on language input. Several related works focus on learning text-to-image generation, while others take a further step by generating simple shapes from descriptions. However, the text parsing and output of these works still remain either at the 2D stage or confined to a single geometry. Building on these works, this paper used both the Stanford Scene Graph Parser (Sebastian et al. 2015) and graph convolutional networks (Kipf and Welling 2016) to compile the analytic semantic structure of the input texts, then generated the 3D architectural form expressed by the language descriptions, aided by several optimization algorithms. To a certain extent, the training results approached the 3D form intended in the textual description, indicating not only the tremendous potential of LAFGM for going from linguistic input to 3D architectural form, but also its innovation in design expression and communication regarding 3D spatial information.
keywords⁄2020archive-note-no-tags
Year 2020
Authors Zhang, Hang.
Issue ACADIA 2020: Distributed Proximities / Volume I: Technical Papers
Pages 238-247.
Library link N/A
Entry filename text-to-form
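Note: as a rough illustration of the graph convolution step cited in the abstract (Kipf and Welling 2016), the sketch below shows one symmetrically normalized GCN layer in Python. The toy graph, feature dimensions, and function name are illustrative assumptions for this entry, not the paper's actual model or data.

import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer (Kipf and Welling 2016): H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)   # D^{-1/2}
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt          # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)            # ReLU activation

# Hypothetical 3-node semantic graph (e.g., "tower", "base", and a relation node)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)        # assumed 4-dim input node features
W = np.random.randn(4, 8)        # assumed 8-dim output embedding
print(gcn_layer(A, H, W).shape)  # (3, 8)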