Abstract

This article proposes the notion of Artificial Sociality to describe communicative AI technologies that create the impression of social behavior. Existing tools that activate Artificial Sociality include, among others, Large Language Models (LLMs) such as ChatGPT, voice assistants, virtual influencers, socialbots and companion chatbots such as Replika. The article highlights three key issues that are likely to shape present and future debates about these technologies, as well as design practices and regulation efforts: the modelling of human sociality that underpins it, the problem of deception, and the issue of control on the part of users. The article discusses the ethical, social and cultural implications that are likely to shape future applications of, and regulation efforts for, these technologies.
