Abstract

The discourse surrounding the societal impacts of generative artificial intelligence (GAI), exemplified by technologies like ChatGPT, often oscillates between extremes: utopian visions of unprecedented productivity and dystopian fears of humanity’s demise. This polarized perspective neglects the nuanced, pragmatic manifestation of GAI. In general, extreme views oversimplify the technology itself or its potential to address societal issues. The authors suggest a more balanced analysis, acknowledging that GAI’s impacts will unfold dynamically over time as diverse implementations interact with human stakeholders and contextual factors. While Big Tech firms dominate GAI’s supply, its demand is expected to evolve through experimentation and use cases. The authors argue that GAI’s societal impact depends on identifiable contingencies, emphasizing three broad factors: the balance between automation and augmentation, the congruence of physical and digital realities, and the retention of human bounded rationality. These contingencies represent trade-offs arising from GAI instantiations, shaped by technological advancements, stakeholder dynamics, and contextual factors, including societal responses and regulations. Predicting long-term societal effects remains challenging due to unforeseeable discontinuities in the technology’s trajectory. The authors anticipate a continuous interplay between GAI initiatives, technological advances, learning experiences, and societal responses, with outcomes depending on the above contingencies.
