Abstract

This paper examines the prevalence of bias in artificial intelligence text-to-image models used in the architecture and design disciplines. Advances in machine learning, particularly in text-to-image generators, have accelerated significantly over the past year, making these tools far more accessible to the design community. Accordingly, this paper critically documents and analyzes the collective, computational, and cognitive biases that designers may currently encounter when working with these tools. It delves into three hierarchical levels of operation and investigates the biases that may be present at each. Starting with the training data for large language models (LLMs), the paper explores how these models may create biases that privilege English-language users and perspectives. It then investigates the digital materiality of the models and how their weights generate specific aesthetic results. Finally, the paper concludes by examining user biases expressed through prompt and image selections, and the potential for platforms to perpetuate these biases by applying user data during training.
