Modeling the interaction of virtual agents in distributed artificial intelligence systems
Modern distributed artificial intelligence (AI) systems rely on large numbers of virtual agents that must work collaboratively to solve complex tasks. However, existing technologies for organizing their interaction suffer from notable shortcomings: high computational complexity, oversimplified operating conditions, poor adaptability to change, and difficulty accounting for the diversity of virtual agents and their emotional reactions during decision-making. The purpose of the study is to develop a new approach to organizing virtual agent operations in distributed AI systems that improves their cooperation, coordination efficiency, and adaptability. The methodological foundation of the study is a specialized emotion model of 100 virtual agents placed in a two-dimensional space and linked by a complex network of connections, combined with machine learning methods to enhance agent coordination. Computer modeling experiments were carried out in the Python programming environment. The results demonstrate that effective communication methods between virtual agents significantly improve their coordination, and that conflicts during task execution are substantially reduced through adaptive mechanisms. The emotion model can achieve high accuracy and contributes to the emergence of new system behavior, including sharp changes in collective decision-making processes. The study also identifies the parameters of virtual agent cooperation essential for stable system operation. The comprehensive approach, which combines rule-based logic with machine learning, effectively improves virtual agent coordination, especially when the agents are heterogeneous. The AI system demonstrates real capacity for large-scale change but remains imperfect at reflecting negative emotional states. These results are relevant to the development of autonomous systems, intelligent networks, and collaboration platforms for virtual agents.
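The abstract does not reproduce the model's equations or code. As a rough illustration of the kind of setup it describes, the sketch below simulates 100 agents with two-dimensional emotion states coupled over a random interaction network and nudged toward their neighbors by a simple rule-based update. The valence–arousal reading of the two dimensions, the network density, and the coupling rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (illustrative assumptions, not the authors' model):
# 100 agents with 2-D emotion states coupled over a random network,
# each agent nudged toward the mean state of its neighbors per step.
import random

N_AGENTS = 100      # number of virtual agents (per the abstract)
EDGE_PROB = 0.05    # assumed density of the interaction network
COUPLING = 0.1      # assumed strength of neighbor influence

random.seed(42)

# Each agent's emotional state is a point in a 2-D space
# (interpreted here, hypothetically, as valence and arousal).
states = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N_AGENTS)]

# Random undirected interaction network between agents.
neighbors = {i: set() for i in range(N_AGENTS)}
for i in range(N_AGENTS):
    for j in range(i + 1, N_AGENTS):
        if random.random() < EDGE_PROB:
            neighbors[i].add(j)
            neighbors[j].add(i)

def step(states):
    """One rule-based coordination step: each agent moves slightly
    toward the average emotional state of its neighbors."""
    new_states = []
    for i, (v, a) in enumerate(states):
        if neighbors[i]:
            mv = sum(states[j][0] for j in neighbors[i]) / len(neighbors[i])
            ma = sum(states[j][1] for j in neighbors[i]) / len(neighbors[i])
            v += COUPLING * (mv - v)
            a += COUPLING * (ma - a)
        new_states.append([v, a])
    return new_states

for t in range(50):
    states = step(states)

# A crude coordination measure: spread of emotion states after the run.
spread = max(abs(s[0]) + abs(s[1]) for s in states)
print(f"max |state| after 50 steps: {spread:.3f}")
```

A machine learning component of the kind the abstract mentions could, for example, adapt the coupling strength per agent; the fixed constant here is only a placeholder.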
- Preprint Article
- 10.2196/preprints.78417
- Jun 2, 2025
BACKGROUND Artificial intelligence (AI), particularly large language models (LLMs), is increasingly used in digital health to support patient engagement and behavior change. One novel application is the delivery of motivational interviewing (MI), an evidence-based, patient-centered counseling technique designed to enhance motivation and resolve ambivalence around health behaviors. AI tools, including chatbots and virtual agents, have shown promise in simulating human-like dialogue and applying MI techniques at scale. However, the extent to which AI systems can faithfully replicate MI principles and generate meaningful behavioral outcomes remains unclear.
OBJECTIVE This scoping review aimed to assess the scope, characteristics, and findings of existing studies that evaluate AI systems delivering motivational interviewing directly to patients. Specifically, we examined the feasibility of these systems, their fidelity to MI principles, and any reported outcomes related to health behavior change.
METHODS We conducted a comprehensive search of five electronic databases (PubMed, Embase, Scopus, Web of Science, and Cochrane Library) for studies published between January 1, 2018, and February 25, 2025. Eligible studies included any empirical design that used AI to perform MI with patients targeting a specific health behavior (e.g., smoking cessation, vaccine uptake). We excluded studies using AI solely for training clinicians in MI. Three independent reviewers conducted screening and data extraction. Extracted variables included study design, AI modality and type, health behavior focus, MI fidelity assessment, and reported outcomes. Data were synthesized narratively to map the evidence landscape.
RESULTS Out of 1001 records identified, 8 studies met the inclusion criteria. Most were exploratory feasibility or pilot studies; only one was a randomized controlled trial. AI modalities included rule-based chatbots, large language models (such as GPT-4), and virtual reality conversational agents. Targeted behaviors included smoking cessation, substance use reduction, vaccine hesitancy, type 2 diabetes self-management, and opioid use during pregnancy. Across studies, AI-delivered MI was rated as usable and acceptable. Patients frequently described AI systems as “judgment-free” and supportive, which enhanced openness and engagement, particularly in stigmatized contexts. Expert evaluations of MI fidelity reported high alignment with MI principles in most cases. However, participants also noted a lack of emotional depth and limited perceived empathy. One study improved these perceptions by adjusting conversational pacing and content complexity. Only one study evaluated behavioral outcomes and found no statistically significant changes.
CONCLUSIONS AI systems, particularly those powered by LLMs, show promise in delivering motivational interviewing that is scalable, accessible, and perceived as nonjudgmental. While AI can replicate many structural aspects of MI and foster engagement, current evidence on its efficacy in driving behavior change is limited. More rigorous studies, including randomized controlled trials with diverse populations, are needed to assess long-term outcomes and to refine AI-human hybrid models that balance efficiency with relational depth.
- Research Article
- 10.1098/rsta.2024.0109
- Nov 13, 2024
- Philosophical transactions. Series A, Mathematical, physical, and engineering sciences
In this article, we identify challenges in the complex interaction between artificial intelligence (AI) systems and society. We argue that AI systems need to be studied in their socio-political context to be able to better appreciate a diverse set of potential outcomes that emerge from long-term feedback between technological development, inequalities and collective decision-making processes. This means that assessing the risks from the deployment of any specific technology presents unique challenges. We propose that risk assessments concerning AI systems should incorporate a complex systems perspective, with adequate models that can represent short- and long-term effects and feedback, along with an emphasis on increasing public engagement and participation in the process. This article is part of the theme issue 'Co-creating the future: participatory cities and digital governance'.
- Research Article
- 10.3390/electronics12092069
- Apr 30, 2023
- Electronics
Virtual agents are artificial intelligence systems that can interact with users in virtual reality (VR), providing companionship and entertainment. Virtual pets have become the most popular virtual agents due to their many benefits. However, haptic interaction with virtual pets involves two challenges: the rapid construction of various haptic proxies, and the design of agent-initiated active interaction. In this paper, we propose a modular haptic agent (MHA) prototype system that enables tactile simulation and encountered-type haptic interaction with common virtual pet agents through a modular design method and a haptic mapping method. Moreover, haptic interaction in the MHA system is actively initiated by the agents according to the user’s intention, which makes the virtual agents appear more autonomous and provides a better human–agent interaction experience. Finally, we conduct three user studies demonstrating that the MHA system offers advantages in realism, interactivity, attractiveness, and evoking user emotions. Overall, MHA is a system that can build multiple companion agents, provide active interaction, and has the potential to quickly build diverse haptic agents for an intelligent and comfortable virtual world.
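The paper's hardware modules and mapping algorithm are not detailed in the abstract; the sketch below is only a toy illustration of the general pattern it describes — virtual pet body regions mapped to reusable haptic proxy modules, with the agent initiating contact from an inferred user intention. Module names, regions, and the intention policy are hypothetical.

```python
# Illustrative sketch only (assumed structure, not the MHA implementation):
# map virtual-pet body regions to reusable haptic proxy modules and let the
# agent initiate a touch event when a user intention is inferred.
from dataclasses import dataclass

@dataclass
class HapticModule:
    name: str  # e.g. "soft fur pad" (hypothetical module names)

    def actuate(self, intensity: float) -> None:
        print(f"[{self.name}] actuating at intensity {intensity:.2f}")

# Hypothetical mapping from virtual pet body regions to proxy modules.
haptic_map = {
    "head": HapticModule("soft fur pad"),
    "back": HapticModule("warm surface module"),
    "paw":  HapticModule("vibration module"),
}

def agent_initiated_interaction(user_intention: str) -> None:
    """Agent-initiated interaction: choose a body region and actuate the
    corresponding proxy module based on the inferred user intention."""
    intention_to_region = {          # assumed policy, for illustration only
        "pet": ("head", 0.6),
        "hold": ("paw", 0.4),
        "idle": ("back", 0.2),
    }
    region, intensity = intention_to_region.get(user_intention, ("back", 0.1))
    haptic_map[region].actuate(intensity)

agent_initiated_interaction("pet")
```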
- Research Article
- 10.58496/mjaih/2023/015
- Dec 8, 2023
- Mesopotamian Journal of Artificial Intelligence in Healthcare
Depression is a common and complex mental health condition that affects millions of people around the world. Traditional treatment typically involves medical advice, medication, and ongoing supervision by a specialist. Recently, there has been growing interest in the potential of artificial intelligence to improve the diagnosis, monitoring, and treatment of depression. The potential of artificial intelligence algorithms has been demonstrated in the development of chatbots, or virtual agents, that can provide treatment, assistance, and support to individuals with depression. These artificial intelligence (AI) systems can simulate therapy sessions, offer strategies, monitor progress across treatment phases, and converse in natural language. Artificial intelligence can also play an important role in the early diagnosis and prognosis of depression. By analysing multiple data sources, such as genetic information, patient medical records, and social media posts, AI algorithms can identify individuals vulnerable to depression and distinguish them from those not at risk, which enables timely interventions and preventive measures. AI can also be used to improve depression treatment strategies: by analysing large databases of patient data, AI systems can determine the ideal drug combinations and dosages for each patient. This personalized approach can lead to better treatment outcomes and reduce the trial-and-error process typically required to find the best course of action. While AI has the potential to support the treatment of depression, it is important to keep in mind that it should never replace qualified medical professionals. Artificial intelligence in depression care seeks to enhance and support the care provided by therapists, psychologists, and psychiatrists, rather than replace human communication and knowledge.
- Research Article
- 10.1016/j.jss.2022.111604
- Jan 3, 2023
- Journal of Systems and Software
Artificial intelligence (AI) in its various forms finds its way more and more into complex distributed systems. For instance, it is used locally, as part of a sensor system, on the edge for low-latency high-performance inference, or in the cloud, e.g. for data mining. Modern complex systems, such as connected vehicles, are often part of an Internet of Things (IoT), which poses additional architectural challenges. To manage complexity, architectures are described with architecture frameworks, which are composed of a number of architectural views connected through correspondence rules. Despite some attempts, the definition of a mathematical foundation for architecture frameworks suitable for the development of distributed AI systems still requires investigation and study. In this paper, we propose to extend the state of the art on architecture frameworks by providing a mathematical model for system architectures that is scalable and supports the co-evolution of different aspects of, for example, an AI system. Based on Design Science Research, this study starts by identifying the challenges with architectural frameworks in a use case of distributed AI systems. We then derive four rules from the identified challenges and formulate them using concepts from category theory. We show how compositional thinking can provide rules for the creation and management of architectural frameworks for complex systems, for example distributed systems with AI. The aim of the paper is not to provide viewpoints or architecture models specific to AI systems, but to provide guidelines, based on a mathematical formulation, on how a consistent framework can be built up from existing or newly created viewpoints. To put the approach into practice and test it, the identified and formulated rules are applied to derive an architectural framework for the EU Horizon 2020 project “Very efficient deep learning in the IoT” (VEDLIoT) in the form of a case study.
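The paper's category-theoretic formulation is not reproduced in the abstract; the toy sketch below only conveys the flavor of the compositional thinking it advocates — architectural views as collections of elements, correspondence rules as mappings between them, and a check that rules are total and compose. The view names and rules are illustrative assumptions.

```python
# Toy sketch (assumed encoding, not the paper's formal model): architectural
# views as sets of elements, correspondence rules as mappings between views,
# with a composition check in the spirit of compositional (category-style) thinking.
logical_view = {"Perception", "Planning", "Inference"}
process_view = {"edge_process", "cloud_process"}
deployment_view = {"edge_node", "cloud_node"}

# Correspondence rules: which element of the target view realizes each source element.
logical_to_process = {
    "Perception": "edge_process",
    "Inference": "edge_process",
    "Planning": "cloud_process",
}
process_to_deployment = {
    "edge_process": "edge_node",
    "cloud_process": "cloud_node",
}

def compose(rule_ab, rule_bc):
    """Compose two correspondence rules (like composing morphisms)."""
    return {a: rule_bc[b] for a, b in rule_ab.items()}

def total(rule, source_view):
    """Check that a rule covers every element of its source view."""
    return set(rule) == source_view

logical_to_deployment = compose(logical_to_process, process_to_deployment)
assert total(logical_to_process, logical_view)
assert total(process_to_deployment, process_view)
print(logical_to_deployment)
# {'Perception': 'edge_node', 'Inference': 'edge_node', 'Planning': 'cloud_node'}
```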
- Research Article
- 10.60087/jklst.vol2.n2.p384
- Sep 16, 2023
- Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online)
Federated learning has emerged as a promising paradigm in the domain of distributed artificial intelligence (AI) systems, enabling collaborative model training across decentralized devices while preserving data privacy. This paper presents a comprehensive exploration of federated learning architecture, encompassing its design principles, implementation strategies, and the key challenges encountered in distributed AI systems. We delve into the underlying mechanisms of federated learning, discussing its advantages in heterogeneous environments and its potential applications across various domains. Furthermore, we analyse the technical intricacies involved in deploying federated learning systems, including communication efficiency, model aggregation techniques, and security considerations. By synthesizing insights from recent research and practical implementations, this paper offers valuable guidance for researchers and practitioners seeking to leverage federated learning in the development of scalable and privacy-preserving AI solutions.
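As a concrete illustration of the model-aggregation step such architectures rely on, the sketch below shows FedAvg-style aggregation, in which the server averages client parameters weighted by local dataset size. The parameter layout and client sizes are placeholder values, not taken from the paper.

```python
# Sketch of FedAvg-style aggregation: the server averages client model
# parameters weighted by each client's local dataset size.
# (Illustrative placeholder values; not tied to a specific implementation.)
from typing import Dict, List

def federated_average(client_params: List[Dict[str, List[float]]],
                      client_sizes: List[int]) -> Dict[str, List[float]]:
    """Weighted average of client parameter dictionaries."""
    total = sum(client_sizes)
    averaged = {}
    for key in client_params[0]:
        dim = len(client_params[0][key])
        acc = [0.0] * dim
        for params, n in zip(client_params, client_sizes):
            for i, value in enumerate(params[key]):
                acc[i] += value * (n / total)
        averaged[key] = acc
    return averaged

# Example round with three clients holding different amounts of data.
clients = [
    {"w": [0.2, 0.4], "b": [0.1]},
    {"w": [0.0, 0.5], "b": [0.3]},
    {"w": [0.4, 0.3], "b": [0.2]},
]
sizes = [100, 50, 150]
print(federated_average(clients, sizes))
```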
- Research Article
- 10.60087/jaigs.vol03.issue01.p46
- Apr 2, 2024
- Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023
Federated learning stands out as a promising approach within the realm of distributed artificial intelligence (AI) systems, facilitating collaborative model training across decentralized devices while safeguarding data privacy. This study presents a thorough investigation into federated learning architecture, covering its foundational design principles, implementation methodologies, and the significant challenges encountered in distributed AI systems. We delve into the fundamental mechanisms underpinning federated learning, elucidating its merits in diverse environments and its prospective applications across various domains. Additionally, we scrutinize the technical complexities associated with deploying federated learning systems, including considerations such as communication efficiency, model aggregation techniques, and security protocols. By amalgamating insights gleaned from recent research endeavors and practical deployments, this study furnishes valuable guidance for both researchers and practitioners aiming to harness federated learning for the development of scalable and privacy-preserving AI solutions.
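Complementing the aggregation sketch above, the toy example below illustrates one common communication-efficiency idea in this line of work: a client transmits only its largest-magnitude parameter deltas (top-k sparsification) rather than its full local update. The tensors and the choice of k are illustrative assumptions, not a specific system's protocol.

```python
# Toy sketch of a communication-efficient client update: send only the top-k
# largest-magnitude parameter deltas instead of the full local update.
def top_k_sparsify(delta, k):
    """Keep the k entries of the update with the largest magnitude.
    Returns (indices, values) as the payload actually transmitted."""
    ranked = sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)
    keep = sorted(ranked[:k])
    return keep, [delta[i] for i in keep]

def apply_sparse_update(global_params, indices, values, lr=1.0):
    """Server-side application of the sparse delta to the global model."""
    updated = list(global_params)
    for i, v in zip(indices, values):
        updated[i] += lr * v
    return updated

global_params = [0.0, 0.0, 0.0, 0.0, 0.0]
local_delta = [0.01, -0.30, 0.02, 0.25, -0.01]   # local_model - global_model
idx, vals = top_k_sparsify(local_delta, k=2)      # transmit only 2 of 5 entries
print(idx, vals)                                  # [1, 3] [-0.3, 0.25]
print(apply_sparse_update(global_params, idx, vals))
```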
- Book Chapter
- 10.1007/3-540-58266-5_19
- Jan 1, 1994
This paper considers user interaction with Distributed Artificial Intelligence (DAI) systems from the perspective that end users primarily use DAI systems for problem solving and decision making tasks. Initially, human problem solving is considered using the framework provided by Newell and Simon; pertinent factors from group problem solving are then detailed, and a classification of user-DAI system interaction is proposed. The role of the user in relation to a variety of systems, and to DAI systems in particular, is then discussed. Finally, a variety of user roles with DAI systems are presented through different scenarios created from the problem solving characteristics identified earlier. These scenarios are then further detailed using an application from the Electricity Supply Industry. The paper concludes by identifying that the ideal user-DAI interaction platform is one in which the user exists as a partially integrated entity.
- Research Article
- 10.1007/s12525-022-00594-4
- Nov 23, 2022
- Electronic Markets
Artificial intelligence (AI) refers to technologies which support the execution of tasks normally requiring human intelligence (e.g., visual perception, speech recognition, or decision-making). Examples of AI systems are chatbots, robots, or autonomous vehicles, all of which have become an important phenomenon in the economy and society. Determining which AI system to trust and which not to trust is critical, because such systems carry out tasks autonomously and influence human decision-making. This growing importance of trust in AI systems has paralleled another trend: the increasing understanding that user personality is related to trust, thereby affecting the acceptance and adoption of AI systems. We developed a framework of user personality and trust in AI systems which distinguishes universal personality traits (e.g., Big Five), specific personality traits (e.g., propensity to trust), general behavioral tendencies (e.g., trust in a specific AI system), and specific behaviors (e.g., adherence to the recommendation of an AI system in a decision-making context). Based on this framework, we reviewed the scientific literature. We analyzed N = 58 empirical studies published in various scientific disciplines and developed a “big picture” view, revealing significant relationships between personality traits and trust in AI systems. However, our review also shows several unexplored research areas. In particular, it was found that prescriptive knowledge about how to design trustworthy AI systems as a function of user personality lags far behind descriptive knowledge about the use and trust effects of AI systems. Based on these findings, we discuss possible directions for future research, including adaptive systems as a focus of future design science research.
- Research Article
- 10.1109/mci.2020.3039068
- Feb 1, 2021
- IEEE Computational Intelligence Magazine
Due to the availability of huge amounts of data and processing abilities, current artificial intelligence (AI) systems are effective in solving complex tasks. However, despite the success of AI in different areas, the problem of designing AI systems that can truly mimic human cognitive capabilities, such as artificial general intelligence, remains largely open. Consequently, many emerging cross-device AI applications will require a transition from traditional centralized learning systems towards large-scale distributed AI systems that can collaboratively perform multiple complex learning tasks. In this paper, we propose a novel design philosophy called democratized learning (Dem-AI) whose goal is to build large-scale distributed learning systems that rely on the self-organization of distributed learning agents that are well-connected, but limited in learning capabilities. Correspondingly, inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently. As such, the Dem-AI learning system can evolve and regulate itself based on the underlying duality of two processes, which we call specialized and generalized processes. In this regard, we present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields. Accordingly, we introduce four underlying mechanisms in the design: a plasticity-stability transition mechanism, self-organizing hierarchical structuring, specialized learning, and generalization. Finally, we establish possible extensions and new challenges for existing learning approaches to provide more scalable, flexible, and powerful learning systems in the new Dem-AI setting.
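The Dem-AI reference design is described only conceptually in the abstract; the toy sketch below merely illustrates the specialized/generalized duality under assumed structures: agents in specialized groups keep group-level models, a generalized model is averaged up the hierarchy, and each group blends the two with a fixed mixing weight standing in for a plasticity-stability trade-off. The two-level hierarchy and all values are assumptions.

```python
# Toy sketch of the specialized/generalized duality (assumed structures, not
# the Dem-AI reference design): specialized groups keep group-level models,
# while a generalized model is an average taken up the hierarchy.
def average(models):
    return [sum(vals) / len(vals) for vals in zip(*models)]

# Hypothetical hierarchy: agents -> specialized groups -> generalized root.
agents = {
    "group_A": [[0.9, 0.1], [0.8, 0.2]],   # agents specialized on task family A
    "group_B": [[0.1, 0.9], [0.2, 0.8]],   # agents specialized on task family B
}

# Specialized process: each group forms its own group-level model.
specialized = {g: average(models) for g, models in agents.items()}

# Generalized process: group models are merged into a shared generalized model.
generalized = average(list(specialized.values()))

# A plasticity/stability-style mix: each group blends its specialized model
# with the generalized one (mixing weight is an illustrative assumption).
ALPHA = 0.7
blended = {g: [ALPHA * s + (1 - ALPHA) * r for s, r in zip(m, generalized)]
           for g, m in specialized.items()}

print(specialized)
print(generalized)
print(blended)
```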
- Book Chapter
- 10.1007/978-0-387-93808-0_25
- Jan 1, 2010
Distributed artificial intelligence (AI) systems for behavior analysis and prediction are a requirement today rather than a luxury of the past. The advent of distributed AI systems with large numbers of sensors and sensor types, and with network bandwidth demands that cannot be met, is also a key driving force. Additionally, fusing a large number of sensor types and inputs is required; this fusion can now be implemented and automated within the AI hierarchy and therefore no longer requires human effort to observe, fuse, and interpret.
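The chapter abstract stays at a high level; as a minimal sketch of the automated multi-sensor fusion it alludes to, the snippet below combines heterogeneous sensor readings into a single confidence-weighted estimate without a human in the loop. Sensor names, readings, and weights are illustrative assumptions.

```python
# Minimal sketch of automated multi-sensor fusion (illustrative sensors and
# weights, not a specific system): combine heterogeneous readings into one
# confidence-weighted estimate of a target attribute.
readings = [
    {"sensor": "radar",    "estimate": 12.4, "confidence": 0.9},
    {"sensor": "acoustic", "estimate": 11.8, "confidence": 0.6},
    {"sensor": "infrared", "estimate": 12.9, "confidence": 0.7},
]

def fuse(readings):
    """Confidence-weighted average of per-sensor estimates."""
    total_confidence = sum(r["confidence"] for r in readings)
    return sum(r["estimate"] * r["confidence"] for r in readings) / total_confidence

print(f"fused estimate: {fuse(readings):.2f}")
```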
- Research Article
- 10.2196/78417
- Jun 2, 2025
- Journal of medical Internet research
Artificial intelligence (AI) is increasingly used in digital health, particularly through large language models (LLMs), to support patient engagement and behavior change. One novel application is the delivery of motivational interviewing (MI), an evidence-based, patient-centered counseling technique designed to enhance motivation and resolve ambivalence around health behaviors. AI tools, including chatbots, mobile apps, and web-based agents, are being developed to simulate MI techniques at scale. While these innovations are promising, important questions remain about how faithfully AI systems can replicate MI principles or achieve meaningful behavioral impact. This scoping review aimed to summarize existing empirical studies evaluating AI-driven systems that apply MI techniques to support health behavior change. Specifically, we examined the feasibility of these systems; their fidelity to MI principles; and their reported behavioral, psychological, or engagement outcomes. We systematically searched PubMed, Embase, Scopus, Web of Science, and Cochrane Library for empirical studies published between January 1, 2018, and February 25, 2025. Eligible studies involved AI-driven systems using natural language generation, understanding, or computational logic to deliver MI techniques to users targeting a specific health behavior. We excluded studies using AI solely for training clinicians in MI. Three independent reviewers screened and extracted data on study design, AI modality and type, MI components, health behavior focus, MI fidelity assessment, and outcome domains. Of the 1001 records identified, 15 (1.5%) met the inclusion criteria. Of these 15 studies, 6 (40%) were exploratory feasibility or pilot studies, and 3 (20%) were randomized controlled trials. AI modalities included rule-based chatbots (9/15, 60%), LLM-based systems (4/15, 27%), and virtual or mobile agents (2/15, 13%). Targeted behaviors included smoking cessation (6/15, 40%), substance use (3/15, 20%), COVID-19 vaccine hesitancy, type 2 diabetes self-management, stress, mental health service use, and opioid use during pregnancy. Of the 15 studies, 13 (87%) reported positive findings on feasibility or user acceptability, while 6 (40%) assessed MI fidelity using expert review or structured coding, with moderate to high alignment reported. Several studies found that users perceived the AI systems as judgment free, supportive, and easier to engage with than human counselors, particularly in stigmatized contexts. However, limitations in empathy, safety transparency, and emotional nuance were commonly noted. Only 3 (20%) of the 15 studies reported statistically significant behavioral changes. AI systems delivering MI show promise for enhancing patient engagement and scaling behavior change interventions. Early evidence supports their usability and partial fidelity to MI principles, especially in sensitive domains. However, most systems remain in early development, and few have been rigorously tested. Future research should prioritize randomized evaluations; standardized fidelity measures; and safeguards for LLM safety, empathy, and accuracy in health-related dialogue. OSF Registries 10.17605/OSF.IO/G9N7E; https://osf.io/g9n7e.
- Research Article
- 10.55041/isjem03402
- May 7, 2025
- International Scientific Journal of Engineering and Management
The escalating mental health crisis worldwide has spurred the examination of alternative technologies to improve access to care, enable early identification of mental health conditions, and provide tailored support. This study investigates the use of Artificial Intelligence (AI), specifically chatbots and Natural Language Processing (NLP), for mental health care. AI-enabled chatbots are increasingly finding their way into therapeutic settings to provide immediate, scalable, and stigma-free support for individuals suffering from psychological distress. These virtual agents can mimic human-like conversations, deliver cognitive behavioural therapy (CBT) strategies, provide continuous mood monitoring, and facilitate evidence-informed interventions. Using sophisticated NLP algorithms, AI systems can analyse users' language, sentiment, and vocal patterns to detect early signs of mental health disorders such as depression, anxiety disorders, and post-traumatic stress disorder (PTSD). This paper provides an overview of existing AI-based mental health applications and investigates the effectiveness of AI-enabled chatbots, as reflected in user engagement, in comparison with traditional methods of therapy. It also examines the ethical concerns around AI in mental health, including information privacy and the limits of machine empathy. In addition, the study considers hybrid models that pair human therapists with AI technologies to improve diagnostic accuracy and therapeutic benefit. The development of AI technologies in mental health care may dramatically reduce barriers to treatment, particularly for those with limited access to mental health care, including people in underserved areas. Overall, the results indicate that artificial intelligence can be a valuable adjunct to traditional therapy when developed responsibly and ethically, providing new opportunities for early intervention, ongoing support, and improved access to mental health services. Keywords: mental health; mental health interventions; clinical psychology; artificial intelligence; AI chatbots; chatbot; AI.
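As a purely illustrative sketch of the language-analysis idea described above, and not any deployed or clinically validated system, the snippet below flags chat messages whose wording suggests persistent low mood using a small keyword score; real systems would rely on trained NLP models, sentiment analysis, and clinical oversight. The marker list and threshold are assumptions.

```python
# Illustrative keyword-based screening sketch (not a clinical tool or any
# deployed system): score chat messages for language associated with low mood
# and flag users whose recent average score crosses an assumed threshold.
NEGATIVE_MARKERS = {"hopeless", "worthless", "exhausted", "alone", "can't sleep"}

def message_score(text: str) -> int:
    """Count low-mood markers appearing in a message."""
    lowered = text.lower()
    return sum(1 for marker in NEGATIVE_MARKERS if marker in lowered)

def flag_user(messages, threshold=0.5):
    """Flag if the average marker count over recent messages exceeds the threshold."""
    avg = sum(message_score(m) for m in messages) / max(len(messages), 1)
    return avg >= threshold, avg

recent = [
    "I feel exhausted and alone most days.",
    "Work was okay, I guess.",
    "I can't sleep and everything seems hopeless.",
]
flagged, score = flag_user(recent)
print(f"flagged={flagged}, average score={score:.2f}")
```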
- Conference Article
- 10.1109/icips.1997.669380
- Oct 28, 1997
As more challenging applications are automated, cooperative problem solving will be an important paradigm for the next generation of intelligent industrial systems. A key problem in applying it to the engineering domain is the development of a structured design method. The authors suggest a design approach for distributed artificial intelligence (DAI) systems based on software engineering, describe the detailed design process of a real DAI system through the example of a simulative transformer substation system (STSS), and present key problems and techniques of DAI in the engineering domain, such as system modeling, task decomposition and allocation, and cooperation mechanisms.
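The STSS design itself is not reproduced in the abstract; the sketch below only illustrates one classic DAI cooperation mechanism of the kind it mentions — task decomposition followed by contract-net-style allocation, in which agents bid on subtasks and each subtask is awarded to the lowest bidder. Agent names, subtasks, and costs are hypothetical.

```python
# Toy sketch of task decomposition and contract-net-style allocation
# (illustrative agents and costs; not the STSS design from the paper).
# A manager splits a task into subtasks, agents bid with a cost estimate,
# and each subtask is awarded to the lowest bidder.
import random

random.seed(0)

subtasks = ["monitor_voltage", "check_breaker", "log_event"]   # decomposition
agents = ["agent_1", "agent_2", "agent_3"]

def bid(agent: str, subtask: str) -> float:
    """An agent's cost estimate for a subtask (random here for illustration)."""
    return round(random.uniform(1.0, 10.0), 2)

def allocate(subtasks, agents):
    """Award each subtask to the agent with the lowest bid."""
    allocation = {}
    for task in subtasks:
        bids = {agent: bid(agent, task) for agent in agents}
        winner = min(bids, key=bids.get)
        allocation[task] = (winner, bids[winner])
    return allocation

for task, (winner, cost) in allocate(subtasks, agents).items():
    print(f"{task} -> {winner} (bid {cost})")
```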