Abstract

In previous writing I’ve described what has arguably become the most widely cited theory of generative art. Drawing on complexity science, and in particular Murray Gell-Mann and Seth Lloyd’s notion of “effective complexity,” I argue that generative art is not a subset of computer art. Rather, generative art turns on the use of autonomous systems and the artist ceding control to those systems. As part of this theory of generative art, I’ve introduced a series of problems. These are not problems in the sense that they require single correct solutions. Rather, they are questions that the artist will consider when making a piece; that critics and historians will typically address in their analysis; and that insightful audience members will ponder. They are problems that typically offer multiple opportunities and possibilities. It is notable that, for the most part, these problems apply equally to digital and non-digital generative art; to generative art past, present, and (it is believed) future; and to ordered, disordered, and complex generative art. In addition, these same problems or questions are generally trivial, irrelevant, or nonsensical when asked in the context of non-generative art. In a sense, the applicability of these questions cleanly divides art into generative art and non-generative art. More importantly, the exploration of these questions can illuminate the analysis and critique of generative art. More recently, a new form of neural-network-based artificial intelligence called “deep learning” has appeared on the scene and has been applied to digital art creation. In this paper I explore whether the problems in generative art noted above hold up well in this new artificial intelligence context. The conclusion reached is that our current complexity-based theory of generative art can easily assimilate the use of deep learning.

Highlights

  • In 2003 I wrote a paper that laid out the core ideas for a theory of generative art using notions from complexity science as a context (Galanter 2003)

  • I suggested that generative art is created when an artist cedes some degree of control to an autonomous system that creates, or is, the art

  • The goal here was to determine whether deep learning AI-based generative art would comfortably fit within generative art theory that is based on the artist ceding control to autonomous systems for the creation of art


Summary

INTRODUCTION

In 2003 I wrote a paper that laid out the core ideas for a theory of generative art using notions from complexity science as a context (Galanter 2003). The often-quoted definition is: “Generative art refers to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art.” Beyond this definition a number of additional ideas were outlined. Some systems use generative rules that tell the artist what to do, yet these alone do not determine a specific form. Others are constraint rules that tell the artist what not to do, but again are not sufficient to fix a specific form.
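To make the idea of an autonomous system concrete, here is a minimal sketch of my own, not an example from the paper: a one-dimensional cellular automaton in which the artist chooses only a rule number and a random seed, then cedes control of the resulting pattern to the system as it runs. The rule number, grid size, and text rendering below are arbitrary choices made purely for illustration.

    import random

    def generate(rule=90, width=64, steps=32, seed=None):
        # Elementary cellular automaton: the artist fixes the rule and seed,
        # but the specific pattern that emerges is produced by the system itself.
        rng = random.Random(seed)
        row = [rng.randint(0, 1) for _ in range(width)]   # random initial state
        history = [row]
        for _ in range(steps):
            # Each new cell is looked up from the rule number, using the
            # three-cell neighbourhood (left, self, right) as a 3-bit index.
            row = [(rule >> (row[(i - 1) % width] * 4 +
                             row[i] * 2 +
                             row[(i + 1) % width])) & 1
                   for i in range(width)]
            history.append(row)
        return history

    # Render the evolution as text; the final image is not directly authored.
    for row in generate(rule=90, seed=7):
        print("".join("#" if cell else "." for cell in row))

The point of the sketch is only that the rules are fully specified in advance while the particular output is not; replacing the hand-written rule table with a trained neural network changes the kind of autonomous system, not the fact that control has been ceded to it.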

Artificial neural networks and deep learning
PROBLEMS IN GENERATIVE ART THEORY
The problem of authorship
The problem of intent
The problem of uniqueness
The problem of authenticity
The problem of dynamics
The problem of postmodernity
The problem of creativity
The problem of meaning
CONCLUSION