Abstract

Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses. This means chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as ‘hallucinations’. When humans use this untruthful content for tasks, it becomes what we call ‘botshit’. This article focuses on how to use chatbots for content-generation work while mitigating the epistemic risks (i.e., risks to the process of producing knowledge) associated with botshit. Drawing on risk management research, we introduce a typology framework that orients how chatbots can be used based on two dimensions: response veracity verifiability and response veracity importance. The framework identifies four modes of chatbot work (authenticated, autonomous, automated, and augmented), each carrying an associated botshit-related risk (ignorance, miscalibration, routinization, and black boxing, respectively). We describe and illustrate each mode and offer advice to help chatbot users guard against the botshit risks that come with each mode.
