Abstract

This study pragmatically investigates an artificial intelligence (AI) speaker (AIS)'s verbal communicative performance based on real AI-human conversation data. Specifically, this study draws on Grice's theory of conversation, which enables the categorization of an AIS's mistaken utterances as violations of specific conversational maxims. Twenty native Korean-speaking participants recorded at least 50 conversations each with Kakao Mini AISs, provided by Daum Kakao, Inc., in Korea. Each conversation, either for information sharing or as daily dialogue, was required to contain at least two turn-taking instances. A total of 1,026 recorded dialogues were decomposed into adjacency pairs based on turn-taking. The dialogues were arranged into 3,365 adjacency pairs, and each pair was then classified as a conversational success or failure based on whether the AIS answered the user's utterance appropriately. Language users' evaluations of the AIS's mistaken expressions were also quantified via an additional acceptability rating test with 1,024 adjacency pairs. The overall results indicate that Grice's "maxim of relation" is the most frequently flouted by AISs and is considered the least natural by language users. These findings suggest that to improve AISs' natural communication capacity, more detailed AI algorithms should be created that generate utterances relevant to either the partner's preceding utterance or the broader conversational context. Although the verbal communicative capacities of the AIS we tested are substantially surpassed by those of recent large language models, such as generative pre-trained transformers (GPTs), the pragmatic evaluation described in the current study will remain useful for more precise linguistic quantification of the communicative performance and competence of current and future language AIs.
