The rise of generative AI has brought with it a surprising paradox: systems that excel at tasks once thought to be uniquely human, such as fluent conversation or persuasive writing, while simultaneously failing to meet traditional expectations of computing in terms of reliability, accuracy, and veracity (as illustrated by the various issues with so-called ‘hallucinations’). We argue that, when generative AI is seen through a traditional computing lens, its development focuses on optimizing for traditional computing traits that remain, in principle, unattainable. This risks backgrounding what is most novel and defining about it. As probabilistic technologies, generative AIs do not store any data or content in any traditional sense. Rather, essential features of the training data become encoded in deep neural networks as patterns that become practically available as styles. We discuss what happens when the distinction between objects and their appearance dissolves and all aspects of images or text become understood as styles, accessible for exploration, creative combination, and generation. For example, the defining visual qualities of entities such as ‘chair’ or ‘cat’ become available as ‘chair-ness’ or ‘cat-ness’ for creative image generation. We argue that, when generative AIs are understood as style engines, their unique capabilities can be conceptualized as complementing those of traditional computing. This will aid both computing practitioners and information systems researchers in reconciling generative AI with, and integrating it into, the traditional IS landscape. Our conceptualization leads us to propose four archetypes of generative AI application and use, and to highlight the future avenues for information systems research that it makes visible, as well as implications for practice and policymaking.