Abstract

Grice’s Cooperative Principle (1975), which describes the implicit maxims that guide effective conversation, has long been applied to conversations between humans. However, as humans interact with non-human dialogue systems more frequently and in a broader scope, an important question emerges: what principles govern those interactions? The present study addresses this question by categorizing human-AI interactions using Grice’s four maxims. In doing so, it demonstrates the advantages and shortcomings of such an approach, ultimately showing that humans do, indeed, apply these maxims to interactions with AI, even making explicit references to the AI’s performance through a Gricean lens. Twenty-three participants interacted with an American English-speaking Alexa and rated and discussed their experience with an in-lab researcher. Researchers then reviewed each exchange, identifying those that might relate to Grice’s maxims: Quantity, Quality, Manner, and Relevance. Many instances of explicit user frustration stemmed from violations of these maxims. Quantity violations were noted for too little information but not for too much, while Quality violations were rare, indicating high trust in Alexa’s responses. Manner violations centered on speed and humanness. Relevance violations were the most frequent of all violations, and they appear to be the most frustrating. While the maxims help describe many of the issues participants encountered with Alexa’s responses, other issues do not fit neatly into Grice’s framework. For example, participants were particularly averse to Alexa initiating exchanges or making unsolicited suggestions. To address this gap, we propose the addition of a maxim of human Priority to describe human-AI interaction: humans and AIs are not (yet?) conversational equals, and human initiative takes priority.
Moreover, we find that Relevance is of particular importance in human-AI interactions and suggest that applying Grice’s Cooperative Principle to human-AI interactions is beneficial both from an AI development perspective and as a tool for describing an emerging form of interaction.

Highlights

  • Understanding user perceptions of Artificial Intelligence (AI) systems is notoriously difficult, but it is an important endeavor

  • We argue that Grice’s Cooperative Principle — the maxims of Quantity, Quality, Relevance, and Manner — is a useful framework to describe human-AI interactions, though there are a few important differences from what we expect with human-human interactions

  • Some participants noted and appreciated when Alexa’s response hit the Quantity sweet spot, as seen in Example (1)

Introduction

Understanding user perceptions of AI systems is notoriously difficult, but it is an important endeavor. While several approaches have been suggested for doing so (see, e.g., Deriu et al., 2019; Schmitt & Ultes, 2015), previous attempts are limited in their scope, examining voice user interfaces that are primarily transactional or limited to a particular domain, like setting up travel plans. These limitations become apparent when examining AI voice assistants like Siri, Google Assistant, and Alexa, which feature numerous functions (including naturalistic conversation, in some cases) and can be used in a variety of contexts. It is important to have a well-established, human-based approach with which to understand how humans interpret their own interactions with AI systems.

