Abstract

Since the introduction of OpenAI's ChatGPT in late 2022, conversational chatbots have gained significant popularity. These chatbots are designed to offer a user‐friendly interface for individuals to engage with technology using natural language in their daily interactions. However, these interactions raise user privacy concerns due to the data shared and the potential for misuse in these conversational information exchanges. Furthermore, there are no overarching laws and regulations governing such conversational interfaces in the United States. Thus, there is a need to investigate user privacy concerns. To understand these concerns in the existing literature, this paper presents a literature review and analysis of 38 papers, selected from 894 retrieved papers, that focus on user privacy concerns arising from interactions with text‐based conversational chatbots through the lens of social informatics. The review indicates that the primary user privacy concern that has consistently been addressed is self‐disclosure. This review contributes to the broader understanding of privacy concerns regarding chatbots and highlights the need for further exploration in this domain. As these chatbots continue to evolve, this paper acts as a foundation for future research endeavors and informs potential regulatory frameworks to safeguard user privacy in an increasingly digitized world.
