Abstract

Measuring the cognitive cost of interpreting the meaning of sentences in a conversation is a complex task, but it lies at the core of Sperber and Wilson's Relevance Theory. In the cognitive sciences, the delay between a stimulus and its response is often used as an approximation of cognitive cost, yet to our knowledge this tool has not previously been used to measure the cognitive cost of interpreting sentences in a free-flowing, interactive conversation. The following experiment tests whether participants' response times during an online conversation, in a protocol inspired by the Turing Test, can discriminate between sentences with a high cognitive cost and sentences with a low cognitive cost. We used violations of Grice's Cooperative Principle to create conditions in which sentences with a high cognitive cost would be produced, hypothesizing that response times are directly correlated with the cognitive cost required to generate implicatures from a statement. Our results are consistent with the literature in the field and shed new light on the effect of maxim violations on the perceived humanness of a conversational agent. We show that violations of the maxim of Relation had a particularly strong impact on both response times and the perceived humanness of a conversation partner, while violations of the first maxim of Quantity and the fourth maxim of Manner had a lesser impact, observed only among male participants.
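
To make the measurement concrete: given a log of per-turn response delays grouped by experimental condition, the violation and control distributions can be compared directly. The sketch below is illustrative only; the condition names, the sample data, and the choice of a Mann-Whitney U test (a non-parametric test that avoids assuming normally distributed delays) are assumptions for exposition, not the study's actual analysis pipeline.

    # Illustrative sketch: comparing response times between maxim-violation
    # and control sentences. All names and values here are hypothetical.
    from scipy.stats import mannwhitneyu

    # Each entry: (condition, delay in seconds between the agent's message
    # and the participant's reply).
    log = [
        ("relation_violation", 7.4), ("relation_violation", 9.1),
        ("relation_violation", 6.8),
        ("control", 3.2), ("control", 4.0), ("control", 3.5),
    ]

    def response_times(log, condition):
        # Collect all response times recorded for one condition.
        return [rt for cond, rt in log if cond == condition]

    violation = response_times(log, "relation_violation")
    control = response_times(log, "control")

    # One-sided test: are responses to violations slower than controls?
    stat, p = mannwhitneyu(violation, control, alternative="greater")
    print(f"U = {stat:.1f}, p = {p:.3f}")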

Highlights

  • The recent advances in Artificial Intelligence (AI) have enabled the spread of virtual social agents in many areas, in particular as customer service agents (Chakrabarti and Luger, 2015; Cui et al., 2017; Xu et al., 2017) and as coaches helping to manage psychological issues such as depression or anxiety on a daily basis, like Woebot or Tess

  • We argue that the Turing Test, already well known in computational sciences as a suggested method to test the intelligence of a machine in textual conversations through a comparison with a human, can be sufficient to detect flaws in pragmatic processing during such conversations when it is instead viewed as a humanness testing environment

  • While the intensity of the effect depends on the study, gender differences when interacting with an artificial conversational partner seem to persist, especially since behavior appears to differ depending on the displayed gender of the artificial agent itself

Introduction

The recent advances in Artificial Intelligence (AI) have enabled the spread of virtual social agents in many areas, in particular as customer service agents (Chakrabarti and Luger, 2015; Cui et al., 2017; Xu et al., 2017) and as coaches helping to manage psychological issues such as depression or anxiety on a daily basis, like Woebot or Tess. These agents often take the shape of chatterbots (or chatbots): agents that converse with a user through a textual conversation, in general using imitations of natural language comprehension and generation. As Grice (1975), and later Sperber and Wilson (1995), noted, there can be vast differences between what is said and what is meant in conversations between humans. On this distinction, Grice (1975) introduced the Cooperative Principle along with its maxims to describe the expectations that allow conversation partners to infer the meaning of an utterance through the intention of its speaker. Relevance Theory (Sperber and Wilson, 1995) later updated Grice's original principles and offered a more in-depth, more unified explanation of the processes involved in inferring what is meant from what is said (and from what is not said).

