Abstract

Generative AI art has exploded onto the scene over the past few months through advanced online platforms like DALL-E 2, Midjourney and Stable Diffusion, which enable anyone with access to a smartphone or PC to create highly polished art by typing in simple text instructions. The technology can bring outlandish and otherworldly creations to life in super-realistic detail. Type in 'Cookie Monster climbing the Shard' and you'll see the children's TV character incongruously scaling the tower. Type 'Taylor Swift commanding a legion of the undead' and a disturbing image of the pop star will appear as if conjured from the bowels of hell itself. The concept of using AI to make art might seem revolutionary, but experiments programming computers to mimic human creativity in fact date back several decades. OpenAI has refused to share the image data DALL-E 2 was trained on, but Stable Diffusion's code is open source, and it shares details of the database of images used to train its model. As these tools become ubiquitous and their capabilities to produce realistic images become more advanced, it will become ever more difficult to accurately and reliably identify which images are 'real' and which are generated by AI. This poses significant risks to democracy, both through the potential for fake images to be reported as real and through increasing scepticism about the authenticity of real images.
