Abstract

This paper investigates whether generative Artificial Intelligence tools for image generation replicate the biases and social stereotypes present in the dominant paradigm. A case study was carried out with the Leonardo.Ai tool, which generated images from simple combined prompts, namely: “Scientist, person”; “Cook, person”; “Doctor, person”; “CEO, person”; “Housekeeper, person”; and “Nurse, person”. The images were analyzed using Rodrigues’ (2007) documentary image analysis methodology, complemented by Gemma Penn’s (2008) contributions. The analysis criteria were gender, age group, ethnicity, body type, clothing, and circumscribed elements. The images generated by Leonardo.Ai were found to exhibit a series of characteristics that perpetuate biases and social stereotypes. Given the likely impact of generative Artificial Intelligence on the production and sharing of information, we conclude that, beyond the ethical issues surrounding the creation of the tool itself, there is a need to discuss ways of making it more inclusive and more responsible in its representation of information.
