Abstract

Artificial intelligence models are moving design exploration beyond deterministic, rule-based parametric systems by offering new possibilities and expanding the design space, which has become more flexible and adaptive to change. Yet the fact that AI models learn independently raises issues with designers' control over the process. More recently, models that bridge natural language processing and computer vision, such as Contrastive Language-Image Pre-Training (CLIP), have been integrated into generative deep learning models such as StyleGAN, combining generative and classification functionalities. In this way, a degree of designer agency can be attained by using text prompts to steer the generative process, which motivated this work. Here, we investigate prototyping a new design system that integrates language-based models and deep learning models in an expanded design space to inform design revision and modification. Our methodology involves experimenting with the targeted deep learning models, prototyping a new framework in which language-based models are integrated into the generative process, and testing the prototype by applying the proposed system to a design case. As a result of the experimentation, the generative model was modified using a set of text prompts describing the intended design alterations. Overall, the results show successful approaches to guiding the generative process and informing design revision, and offer insights into associated potentials and limitations, as discussed in the paper.
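The text-guided loop the abstract describes can be illustrated schematically: a latent vector is adjusted so that the generated output scores higher against a prompt embedding. The sketch below is not the authors' implementation; `generate` and `clip_score` are simple stand-ins for StyleGAN's generator and CLIP's image-text similarity, and the target vector is a hypothetical prompt embedding.

```python
import random

# Stand-ins for the real models (assumptions, not the paper's code):
# `generate` maps a latent vector to an "image" (here, just the vector itself),
# and `clip_score` rates how well that image matches a text prompt
# (here, negative squared distance to a vector standing in for the prompt).
TARGET = [0.8, -0.3, 0.5, 0.1]  # hypothetical CLIP embedding of the text prompt

def generate(latent):
    return latent  # a real StyleGAN generator would render an image here

def clip_score(image, target):
    return -sum((a - b) ** 2 for a, b in zip(image, target))

def guide_latent(prompt_target, steps=500, sigma=0.1, seed=0):
    """Hill-climb a latent vector so the generated output matches the prompt."""
    rng = random.Random(seed)
    latent = [rng.gauss(0, 1) for _ in prompt_target]
    best = clip_score(generate(latent), prompt_target)
    for _ in range(steps):
        candidate = [x + rng.gauss(0, sigma) for x in latent]
        score = clip_score(generate(candidate), prompt_target)
        if score > best:  # keep the perturbation only if it improves the match
            latent, best = candidate, score
    return latent, best

latent, score = guide_latent(TARGET)
print(round(score, 3))  # score approaches 0 as the generated output matches the prompt
```

In practice, CLIP-guided systems typically use gradient-based optimization through a differentiable generator rather than the random search shown here; the structure of the loop, however, is the same: generate, score against the prompt, and update the latent code.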
