Abstract

Google’s LaMDA, OpenAI’s GPT‐3, and Meta’s BlenderBot are artificial intelligence (AI)‐based chatbots that have been trained on billions of documents, creating the notion of “massive data.” These systems use human‐generated documents to capture the words, and the relationships between words, that people use when they communicate. This paper examines similarities among these systems and emerging issues regarding massive data language models, including whether they are sentient, the use and impact of scale, information use and ownership, and explanations of discussions and answers. It also directly investigates some artifacts of Google’s LaMDA and compares them with Meta’s BlenderBot. Finally, it examines emerging issues and questions deriving from our analysis.
