Seeking common ground with a conversational chatbot

  • Abstract
  • References
  • Similar Papers
Abstract

Conversational AI is advancing rapidly, enabling significant improvements in chatbots’ conversational abilities. Currently available conversational chatbots (e.g., Snapchat’s MyAI) appear to generate fairly realistic, often human-like output. As collaboration between humans and machines becomes more common, and AI systems are increasingly viewed as more than just tools, understanding human communication in such contexts is crucial. Despite the vast array of applications and the increasing number of human-bot interactions, research on how humans interact with conversational chatbots is scarce. One possible reason for this gap is that studying human-computer communication may require adaptations of existing pragmatic frameworks, due to the unique characteristics of these interactions. A key feature of such conversations is their asymmetrical nature. In this paper, we present evidence that the sociocognitive approach (SCA), which takes into account the asymmetry between interlocutors as regards their possible common grounds, has explanatory potential to describe human-AI-powered chatbot interactions. We collected data from thirty-two L1 Hungarian participants interacting with a conversational chatbot on three consecutive days. The turn-by-turn analysis of the 96 conversations provides insights not only into the nature of common ground humans presuppose with a conversational agent, but also into the processes of building emergent common ground over time. Furthermore, we present linguistic evidence that both egocentrism and cooperation play a role in human-chatbot interaction. While the former is manifested in approaching the chatbot as if it were human, the latter appears to play a role in changing strategies that serve common ground seeking and building.

References (showing 10 of 48 papers)
  • Cited by 188
  • 10.1111/1467-9280.00439
Speakers' overestimation of their effectiveness.
  • May 1, 2002
  • Psychological Science
  • Boaz Keysar + 1 more

  • Cited by 47
  • 10.1075/ld.2.2.06kec
Is there anyone out there who really is interested in the speaker?
  • Aug 13, 2012
  • Language and Dialogue
  • Istvan Kecskes

  • Open Access
  • Cited by 26
  • 10.3389/frobt.2020.00046
Trouble and Repair in Child–Robot Interaction: A Study of Complex Interactions With a Robot Tutee in a Primary School Classroom
  • Apr 9, 2020
  • Frontiers in Robotics and AI
  • Sofia Serholt + 3 more

  • Open Access
  • Cited by 6714
  • 10.1353/lan.1974.0010
A simplest systematics for the organization of turn-taking for conversation
  • Dec 1, 1974
  • Language
  • Harvey Sacks + 2 more

  • 10.1515/9783110766752-004
From laboratory to real life: Obstacles in common ground building
  • Feb 20, 2023
  • Arto Mustajoki

  • Cited by 37
  • 10.1145/3313831.3376209
A Conversation Analysis of Non-Progress and Coping Strategies with a Banking Task-Oriented Chatbot
  • Apr 21, 2020
  • Chi-Hsun Li + 5 more

  • Open Access
  • Cited by 1
  • 10.1177/14614456221074085
Avoidance of cognitive efforts as a risk factor in interaction
  • Jun 1, 2022
  • Discourse Studies
  • Arto Mustajoki + 1 more

  • Cited by 14
  • 10.1515/9783110211474.2.151
A new look at common ground: memory, egocentrism, and joint meaning
  • Aug 19, 2008
  • Herbert L Colston

  • Cited by 29
  • 10.1075/pbns.270
Designing Speech for a Recipient
  • Nov 15, 2016
  • Kerstin Fischer

  • Cited by 179
  • 10.1515/ip.2007.004
Communication and miscommunication: The role of egocentric processes
  • Jan 20, 2007
  • Intercultural Pragmatics
  • Boaz Keysar

Similar Papers
  • Book Chapter
  • Cited by 41
  • 10.1007/978-3-319-01014-4_15
On the Dynamic Relations Between Common Ground and Presupposition
  • Jan 1, 2013
  • Istvan Kecskes + 1 more

The common ground theory of presupposition has been dominant since the seventies (Stalnaker 1974, 1978, 2002). This theory resulted from a view of communication as transfer between minds. In this view, interlocutors presume that speakers speak cooperatively, infer that speakers have the intentions and beliefs necessary to make sense of their speech acts, and treat such entities as pre-existing psychological states that are only later formulated in language. Common ground is considered a distributed form of mental representation and is adopted as the basis on which successful communication is warranted (Arnseth and Solheim 2002; Kecskes and Zhang 2009). However, the theory has not gone without objection and criticism (e.g. Abbott 2008; Beaver and Zeevat 2004; von Fintel 2001, 2006; Simons 2003) because it is based on “an oversimplified picture of conversation” (Abbott 2008), and as a consequence the relationship between common ground and presupposition has also been oversimplified. In this approach, presupposition is often considered a conventional or conversational constraint of common ground, or a requirement on common ground that must be satisfied in order to make an appropriate utterance. The problem of accommodation is a critical issue that has been raised against this view and has posed a great challenge to the theory, stimulating diverse alternatives. The goal of this paper is to redefine the relationship between common ground and presupposition within the confines of the socio-cognitive approach (SCA). SCA (Kecskes 2008; Kecskes and Zhang 2009; Kecskes 2010a, b), as adopted in this paper, offers an alternative view of communication, which claims that communication is not an ideal transfer of information, and that cooperation and egocentrism (Barr and Keysar 2005; Colston 2005; Keysar 2007) are both present in the process of communication to varying extents.
The SCA emphasizes the dynamics of common ground creation and updating in the actual process of interaction, in which interlocutors are considered “complete” individuals with different possible cognitive statuses, being less or more cooperative at different stages of the communicative process. Presupposition is a proposal of common ground, and there is a vibrant interaction between the two. They enjoy a cross-relation in the content and manner in which they are formed, and their dynamics are inherently related and mutually explanatory. This claim has important implications for the problem of presupposition accommodation. After the introduction, Sect. 2 describes the socio-cognitive approach. Section 3 reviews assumed common ground, and Sect. 4 introduces speaker-assigned presupposition. Section 5 discusses the dynamism of presuppositions and common ground, and claims that their dynamic behaviors are coherent and explanatory of each other. Section 6 readdresses the accommodation problem by redefining these relations.

  • Conference Article
  • Cited by 8
  • 10.1109/incet51464.2021.9456321
Evaluating the Performance of Various Deep Reinforcement Learning Algorithms for a Conversational Chatbot
  • May 21, 2021
  • R Rajamalli Keerthana + 2 more

Conversational agents are the most popular AI technology in IT trends. Domain-specific chatbots are now used by almost every industry to upgrade their customer service. The proposed paper shows the modelling and performance of one such conversational agent created using deep learning. The proposed model utilizes NMT (Neural Machine Translation) from the TensorFlow software libraries. A BiRNN (Bidirectional Recurrent Neural Network) is used to process input sentences that contain a large number of tokens (20-40 words). To understand the context of the input sentence, an attention model is used along with the BiRNN. Conversational models usually have one drawback: they sometimes provide irrelevant answers to the input. This happens quite often in conversational chatbots because the chatbot does not realize that it is answering without context. This drawback is addressed in the proposed system using Deep Reinforcement Learning techniques. Deep Reinforcement Learning follows a reward system that enables the bot to differentiate between right and wrong answers, and it allows the chatbot to understand the sentiment of the query and reply accordingly. The Deep Reinforcement Learning algorithms used in the proposed system are Q-Learning, Deep Q Neural Network (DQN) and Distributional Reinforcement Learning with Quantile Regression (QR-DQN). The performance of each algorithm is evaluated and compared in this paper in order to find the best DRL algorithm. The datasets used in the proposed system are the Cornell Movie-Dialogs Corpus and CoQA (A Conversational Question Answering Challenge). CoQA is a large dataset that contains data collected from 8,000+ conversations in the form of questions and answers. The main goal of the proposed work is to increase the relevancy of the chatbot's responses and to reduce the perplexity of the conversational chatbot.
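The reward mechanism this abstract describes can be illustrated with a minimal tabular Q-learning sketch. This is purely illustrative: the single-state setting, the two actions, and the reward function below are toy stand-ins, not the paper's actual NMT/BiRNN/DQN setup.

```python
import random

# Toy stand-in actions: in the paper's setting these would be candidate replies.
ACTIONS = ["relevant_reply", "irrelevant_reply"]

def reward(action):
    # The reward system rewards contextually relevant answers and
    # penalizes off-context ones (hypothetical values for illustration).
    return 1.0 if action == "relevant_reply" else -1.0

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table, for brevity
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        r = reward(a)
        # Q-learning update; with one state, the best future value is max(q)
        q[a] += alpha * (r + gamma * max(q.values()) - q[a])
    return q

q = train()
print(max(q, key=q.get))  # → relevant_reply
```

Over repeated episodes the penalized action's value stays below the rewarded one's, so the greedy policy learns to prefer relevant replies; DQN and QR-DQN replace the table with a neural estimator of the same quantity.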

  • Research Article
  • 10.1080/0144929x.2025.2541222
Common ground improves learning with conversational agents
  • Aug 8, 2025
  • Behaviour & Information Technology
  • Anita Körner + 4 more

Although conversational agents are successfully applied in teaching, it is largely unclear which communication principles should be employed to optimise learning. We examine the influence of common ground (i.e. shared knowledge on which to build during conversation) on learning. In an in-class experiment, students studied with one of two pedagogical conversational agents. The control version provided information without emphasising grounding, whereas the common ground version emphasised grounding, for example, by encouraging students to monitor and repair common ground. After the learning unit, students evaluated their learning experience and the pedagogical conversational agent, after which they were tested on the studied material. Students in the common ground (vs. the control) condition performed better in a post-study knowledge test and engaged longer with the pedagogical conversational agent. Thus, the common ground emphasis facilitated learning with a conversational agent, indicating that grounding principles should be incorporated when designing conversational agents.

  • Research Article
  • 10.1080/02650487.2025.2558479
Unveiling the human touch: how AI chatbots’ emotional support and human-like profiles reduce psychological reactance to promote user self-disclosure in mental health services
  • Sep 9, 2025
  • International Journal of Advertising
  • Hanyoung Kim + 1 more

In delivering mental health services, the ability of AI chatbots to emulate natural interpersonal interactions with users is of vital importance. However, perceived lack of human-like capabilities and traits in these chatbots may hinder user engagement in such interactions. This study investigated two dimensions of anthropomorphism in human–chatbot interactions: a content-based factor (level of emotional support: high vs. low) and a source-based factor (chatbot profile: human-like vs. machine-like) within the context of mental health interactions. It specifically explored how these factors can encourage users to disclose personal information, a social process traditionally reserved for human-to-human interactions, by reducing psychological reactance that arises during conversations with the chatbot. Results from an experimental study with GPT-4–based AI chatbots revealed that high emotional support from a chatbot, particularly when presented with a human-like profile, effectively reduced psychological reactance. This reduction was associated with increased personal disclosure during interactions, which, in turn, promoted the adoption of adaptive coping strategies for managing stress. These findings offer practical insights for designing emotionally intelligent AI chatbots.

  • Research Article
  • 10.1108/intr-06-2023-0514
Uncovering the mechanisms of common ground in human–agent interaction: review and future directions for conversational agent research
  • Jan 20, 2025
  • Internet Research
  • Antonia Tolzin + 1 more

Purpose: Human–agent interaction (HAI) is increasingly influencing our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural language interactions with personalization delivered through artificial intelligence capabilities. However, research on CAs, as well as practical failures, indicates that CA interaction oftentimes fails miserably. To reduce these failures, this paper introduces the concept of building common ground for more successful HAIs.
Design/methodology/approach: Based on a systematic literature analysis, we identified 38 articles meeting the eligibility criteria. We critically reviewed this body of knowledge within a formal narrative synthesis structured around the use of common ground in the interaction with CAs.
Findings: Based on the systematic review, our analysis reveals five mechanisms for achieving common ground: embodiment, social features, joint action, knowledge base and mental model of the conversational agent. We point out the relationships between these mechanisms, as they are related to each other in directional and bidirectional ways.
Research limitations/implications: Our findings contribute to theory with several implications for CA research. First, we provide implications for the organization of common ground mechanisms for CAs. Second, we provide insights into the mechanisms and nomological network for achieving common ground when interacting with CAs. Third, we provide a broad research agenda for future CA research that centers around the important topic of common ground for HAI.
Originality/value: We offer novel insights into grounding mechanisms and highlight the potential of considering common ground in different HAI processes. Consequently, we secure further understanding and deeper insights into possible mechanisms of common ground to shape future HAI processes.

  • Research Article
  • 10.1108/oir-06-2024-0375
“Talk to me, I’m secure”: investigating information disclosure to AI chatbots in the context of privacy calculus
  • Feb 26, 2025
  • Online Information Review
  • Xiaoxiao Meng + 1 more

Purpose: This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research that primarily viewed AI chatbots from a non-anthropomorphic approach, this paper contends that AI chatbots are taking on an emotional component for humans. This study thus explores the topic by considering both rational and non-rational perspectives, thereby providing a more comprehensive understanding of user behavior in digital environments.
Design/methodology/approach: Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users’ willingness to disclose information.
Findings: Cognitive, emotional and behavioral dimensions all positively influence perceived benefits of using ChatGPT, which in turn enhances privacy disclosure. While cognitive, emotional and behavioral dimensions negatively impact perceived risks, only the emotional and behavioral dimensions significantly affect perceived risk, which in turn negatively influences privacy disclosure. Notably, the cognitive dimension’s lack of a significant mediating effect suggests that users’ awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information based on positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox.
Research limitations/implications: This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots. Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction, rooted in Eastern philosophies such as animism in Shintoism and Buddhism, suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023).
Practical implications: The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021). Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023).
Originality/value: The study’s findings not only draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots, but also contribute to the literature on human–computer interaction and technology acceptance within the framework of privacy calculus theory, providing practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.

  • Book Chapter
  • Cited by 26
  • 10.1007/978-3-030-78642-7_53
Privacy Concerns in Chatbot Interactions: When to Trust and When to Worry
  • Jan 1, 2021
  • Rahime Belen Saglam + 2 more

Through advances in their conversational abilities, chatbots have started to request and process an increasing variety of sensitive personal information. The accurate disclosure of sensitive information is essential where it is used to provide advice and support to users in the healthcare and finance sectors. In this study, we explore users' concerns regarding factors associated with the use of sensitive data by chatbot providers. We surveyed a representative sample of 491 British citizens. Our results show that user concerns focus on the deletion of personal information and the inappropriate use of their data. We also identified that individuals were concerned about losing control over their data after a conversation with conversational agents. We found no effect of a user's gender or education, but did find an effect of the user's age, with those over 45 being more concerned than those under 45. We also considered the factors that engender trust in a chatbot. Our respondents' primary focus was on the chatbot's technical elements, with factors such as response quality identified as the most critical. We again found no effect of the user's gender or education level; however, when we considered some social factors (e.g. avatars or perceived 'friendliness'), we found that those under 45 years old rated these as more important than those over 45. The paper concludes with a discussion of these results within the context of designing inclusive digital systems that support a wide range of users.

  • Research Article
  • Cited by 15
  • 10.1016/j.pragma.2022.03.001
Common ground, cooperation, and recipient design in human-computer interactions
  • Mar 28, 2022
  • Journal of Pragmatics
  • Judit Dombi + 2 more

In recent years, the number of human-machine interactions has increased considerably. Additionally, we have evidence of linguistic differences between human-machine interactions and human–human conversations (e.g., Timpe-Laughlin et al., 2022). Therefore, it is reasonable to revisit theoretical frameworks that conceptualize interactional language use and investigate to what extent they still apply to technology-mediated interactions. As a first attempt at exploring whether pragmatics theories apply to human-machine interaction, we examined how well Kecskés's (2013) socio-cognitive approach (SCA) focusing on asymmetric interactions (e.g., between interlocutors of different language backgrounds) applies to the asymmetry of human-machine interactions.Using examples from experimental data, we present the nature of common ground between human and machine (spoken dialogue system) interlocutors, focusing on the construction of and reliance on the emergent side of common ground that is informed by the actual situational experience. Like Kecskés, we argue that both egocentrism and cooperation play a role in human-machine interaction. While the former is manifested in approaching the machine interlocutor as if it was human, the latter appears to play a role in common ground seeking and building as well as in recipient design. We demonstrate that Kecskés's SCA is a fitting framework for analyzing human-machine communication contexts.

  • Research Article
  • Cited by 419
  • 10.1016/j.future.2018.01.055
In the shades of the uncanny valley: An experimental study of human–chatbot interaction
  • Feb 6, 2018
  • Future Generation Computer Systems
  • Leon Ciechanowski + 3 more

  • Research Article
  • Cited by 378
  • 10.1080/10447318.2020.1841438
How Should My Chatbot Interact? A Survey on Social Characteristics in Human–Chatbot Interaction Design
  • Nov 8, 2020
  • International Journal of Human–Computer Interaction
  • Ana Paula Chaves + 1 more

Chatbots’ growing popularity has brought new challenges to HCI, having changed the patterns of human interactions with computers. The increasing need to approximate conversational interaction styles raises expectations for chatbots to present social behaviors that are habitual in human–human communication. In this survey, we argue that chatbots should be enriched with social characteristics that cohere with users’ expectations, ultimately avoiding frustration and dissatisfaction. We bring together the literature on disembodied, text-based chatbots to derive a conceptual model of social characteristics for chatbots. We analyzed 56 papers from various domains to understand how social characteristics can benefit human–chatbot interactions and identify the challenges and strategies to designing them. Additionally, we discussed how characteristics may influence one another. Our results provide relevant opportunities to both researchers and designers to advance human–chatbot interactions.

  • Research Article
  • Cited by 8
  • 10.1515/ip-2021-0003
Common ground and positioning in teacher-student interactions: Second language socialization in EFL classrooms
  • Jan 18, 2021
  • Intercultural Pragmatics
  • Deniz Ortaçtepe Hart + 1 more

This study aims to present how intercultural and intracultural communication unfolds in EFL classrooms with NNESTs and NESTs who constantly negotiate common ground and positionings with their students. Three NEST and three NNEST teaching partners were observed and audio recorded during the first and fifth weeks of a new course they taught in turns. Data were transcribed and analyzed through conversation analysis using Kecskes and Zhang’s socio-cognitive approach to common ground (Kecskes, István & Fenghui Zhang. 2009. Activating, seeking, and creating common ground: A socio-cognitive approach. Pragmatics and Cognition 17(2). 331–355) and Davies and Harré’s positioning theory (Davies, Bronwyn & Rom Harré. 1990. Positioning: The discursive production of selves. Journal for the Theory of Social Behaviour 20(1). 43–63). The findings revealed several differences in the ways NESTs and NNESTs established common ground and positioned themselves in their social interactions. NESTs’ lack of shared background with their students positioned them as outsiders in a foreign country and enabled them to establish more core common ground (i.e., building new common knowledge between themselves and their students). NNESTs maintained the already existing core common ground with their students (i.e., activating the common knowledge they shared with their students) while positioning themselves as insiders. NESTs’ difference-driven, cultural mediator approach to common ground helped them create meaningful contexts for language socialization through which students not only learned the target language but also the culture. On the other hand, NNESTs adopted a commonality-driven, insider approach that was transmission-of-knowledge oriented, focusing on accomplishing a pedagogical goal rather than language socialization.

  • Conference Article
  • Cited by 47
  • 10.1109/iccubea47591.2019.9129347
Conversational AI: An Overview of Methodologies, Applications & Future Scope
  • Sep 1, 2019
  • Pradnya Kulkarni + 4 more

Conversational AI is a sub-domain of Artificial Intelligence that deals with speech-based or text-based AI agents that have the capability to simulate and automate conversations and verbal interactions. Conversational AI agents like chatbots and voice assistants have proliferated due to two main developments. On the one hand, the methods required to develop highly accurate AI models, i.e. Machine Learning and Deep Learning, have seen a tremendous amount of advancement due to increasing research interest in these fields, accompanied by progress in achieving higher computing power with the help of complex hardware architectures like GPUs and TPUs. On the other hand, due to their natural language interface and the nature of their design, conversational agents have been seen as a natural fit in a wide array of applications like healthcare, customer care, e-commerce and education. This rise in practical implementation and demand has in turn made Conversational AI a ripe area for innovation and novel research. Newer and more complex models for the individual core components of a Conversational AI architecture are being introduced at a never-before-seen rate. This study is intended to shed light on such latest research in Conversational AI architecture development and also to highlight the improvements that these novel innovations have achieved over their traditional counterparts. This paper also provides a comprehensive account of some of the research opportunities in the Conversational AI domain, thus setting the stage for future research and innovation in this field.

  • Book Chapter
  • 10.1017/9781108884303.005
The Theoretical Framework of Intercultural Pragmatics
  • Oct 20, 2022
  • Istvan Kecskes

The chapter presents the socio-cognitive approach (SCA) to communication that serves as a theoretical frame for intercultural pragmatics. SCA was developed to explain the specific features of intercultural interactions and thus offers an alternative to the Gricean approaches that can be considered monolingual theories. There are two important claims that distinguish SCA from other pragmatic theories. First, SCA emphasizes that cooperation and egocentrism are not antagonistic features of communication. While (social) cooperation is an intention-directed practice that is governed by relevance, (individual) egocentrism is an attention-oriented trait dominated by salience that refers to the relative importance or prominence of information and signs. Second, SCA claims that pragmatic theories have tried to describe the relationship of the individual and social factors by putting too much emphasis on idealized language use, and focusing on cooperation, rapport, and politeness while paying less attention to the untidy, messy, poorly organized and impolite side of communication. SCA pays equal attention to both sides. The first part of the chapter explains the main tenets of SCA. The second part discusses how context, common ground and salience are intertwined in meaning creation and comprehension. The chapter closes with suggestions for future research.

  • Book Chapter
  • Cited by 1
  • 10.1007/978-981-19-2416-3_13
Deep Reinforcement-Based Conversational AI Agent in Healthcare System
  • Jan 1, 2022
  • Pradnya S Kulkarni + 3 more

Conversational AI is a sub-domain of artificial intelligence that deals with speech-based or text-based AI agents that have the capability to simulate and automate conversations and verbal interactions. A Goal Oriented Conversational Agent (GOCA) is a conversational AI agent that attempts to solve a specific problem for the users as per their inputs. The development of Reinforcement Learning algorithms has opened up new opportunities in research related to conversational AI, due to the striking similarity the algorithm bears to the way a conversation takes place. This chapter aims to describe a novel, hybrid conversational AI architecture using Deep Reinforcement Learning that can give state-of-the-art results on the tasks of Intent Classification, Entity Recognition, Dialog Management, State Tracking, Information Retrieval and Natural Language Response Generation. The architecture also consists of external AI modules, focused on carrying out intelligent tasks pertaining to the healthcare sector. The AI tasks that the conversational agent is capable of performing are text-based Question Answering, Text Summarization and Visual Question Answering.
Keywords: Deep reinforcement learning; Conversational AI agent; Bidirectional encoder representations from transformers (BERT) model

  • Research Article
  • Cited by 7
  • 10.3233/shti190060
An Evolutionary Bootstrapping Development Approach for a Mental Health Conversational Agent.
  • Jan 1, 2019
  • Studies in health technology and informatics
  • Ahmad Kashif + 9 more

Conversational agents are being used to help in the screening, assessment, diagnosis, and treatment of common mental health disorders. In this paper, we propose a bootstrapping approach for the development of a digital mental health conversational agent (i.e., chatbot). We start from a basic rule-based expert system and iteratively move towards a more sophisticated platform composed of specialized chatbots, each aiming to assess and pre-diagnose a specific mental health disorder using machine learning and natural language processing techniques. During each iteration, user feedback from psychiatrists and patients is incorporated into the iterative design process. A survival-of-the-fittest approach is also used to assess the continuation or removal of a specialized mental health chatbot in each generational design. We anticipate that our unique and novel approach can be used for the development of conversational mental health agents, with the ultimate goal of designing a smart chatbot that delivers evidence-based care and contributes to scaling up services while decreasing the pressure on mental health care providers.

More from: Intercultural Pragmatics
  • Research Article
  • 10.1515/ip-2025-3004
How Ta’ārof works: Ritual politeness and social hierarchy in Persian communication
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Soleiman Ghaderi

  • Research Article
  • 10.1515/ip-2025-3001
Short editorial note
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Alessandro Capone

  • Research Article
  • 10.1515/ip-2025-3007
Miriam A. Locher, Daria Dayter & Thomas C. Messerli: Pragmatics and Translation
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Yuan Ping

  • Research Article
  • 10.1515/ip-2025-3008
Victoria Guillén-Nieto: Hate Speech: Linguistic Perspectives
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Zongyu Huang + 1 more

  • Research Article
  • 10.1515/ip-2025-3006
On bullshit and lies: For a responsibility-based approach
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Irati Zubia Landa

  • Research Article
  • 10.1515/ip-2025-3005
On (in)definite ART in Italian and Italo-Romance varieties
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Giuliana Giusti

  • Research Article
  • 10.1515/ip-2025-3002
Charting the decline of pragmatics in adults with neurodegenerative disorders
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Louise Cummings

  • Research Article
  • 10.1515/ip-2025-frontmatter3
Frontmatter
  • Jun 26, 2025
  • Intercultural Pragmatics

  • Research Article
  • 10.1515/ip-2025-3003
Pope Leo’s first words to the world: A semantic and intercultural perspective
  • Jun 26, 2025
  • Intercultural Pragmatics
  • Anna Wierzbicka

  • Research Article
  • 10.1515/ip-2025-2003
AI, be less ‘stereotypical’: ChatGPT’s speech is conventional but never unique
  • Apr 28, 2025
  • Intercultural Pragmatics
  • Vittorio Tantucci + 1 more
