Is Language All We Need? A Query Into Architectural Semantics Using A Multimodal Generative Workflow
This project examines how an interconnected artificial intelligence (AI)-assisted workflow can help overcome the limitations of current language-based models and streamline machine-vision tasks in architectural design. Establishing a precise correspondence between text and visual feature representations is problematic and can lead to ambiguity in interpreting the morphological and tectonic complexity of a building. A textual representation of a design concept addresses spatial complexity only in a reductionist way, since the outcome of the design process is co-dependent on multiple interrelated systems, according to systems theory (Alexander 1968). We therefore propose a process of feature disentanglement (using low-level features, e.g., composition) within a workflow of interconnected generative adversarial networks (GANs). Inserting natural language models into the proposed workflow can help mitigate the semantic distance between domains and guide the encoding of semantic information throughout the domain transfer process.
Keywords: Neural Language Models, Generative Adversarial Networks, Domain Transfer, Design Agency, Semantic Encoding