Abstract
By ingesting a vast corpus of source material, generative deep learning models are capable of encoding multi-modal data into a shared embedding space, producing synthetic outputs that cannot be decomposed into their constituent parts. These models call into question the relation between conceptualisation and production in creative practices ranging from musical composition to visual art. Moreover, artificial intelligence as a research program poses deeper questions regarding the very nature of aesthetic categories and their constitution. In this essay I will consider the intelligibility of the art object through the lens of a particular family of machine learning models, known as ‘latent diffusion’, extending an aesthetic theory to complement the image of thought these models (re)present to us. This will lead to a discussion of the semantics of computational states, probing the inferential and referential capacities of such models. Throughout, I will endorse a topological view of computation, which will inform the neural turn in computer science, characterised as a shift from the notion of a stored program to that of a cognitive model. Lastly, I will examine the instability of these models by analysing their limitations in terms of compositionality and grounding.
Technophany, A Journal for Philosophy and Technology