Artificial Intelligence and the Changing Landscape of Therapy

  • Abstract
  • Literature Map
  • Similar Papers
Abstract
Translate article icon Translate Article Star icon
Take notes icon Take Notes

ABSTRACT Artificial intelligence (AI) has the potential to reshape the therapeutic landscape of the future, offering new possibilities for access, efficiency and insight, while simultaneously challenging foundational principles of human connection, ethics and professional identity. This editorial introduces the growing interface between AI and therapy, highlighting how conversational agents, analytic tools and generative systems are beginning to influence assessment, supervision and clinical decision‐making. It also emphasises the need for critical, ethically informed engagement, integrating AI within ecosystems of support and learning that respect the relational and contextual nature of therapeutic work. The editorial introduces a collection of papers in Counselling and Psychotherapy Research exploring these developments. Together, the articles address the integration of AI into training, ethics and multicultural practice; examine novel uses of AI for outcome monitoring and process analysis; and offer both advocacy and critique of AI's emerging role in mental health. Collectively, they reflect a profession in dialogue, one that is curious, cautious, and committed to ensuring that technology serves, rather than shapes, the values of therapy.

Similar Papers
  • PDF Download Icon
  • Research Article
  • Cite Count Icon 12
  • 10.2196/45984
Scope, Characteristics, Behavior Change Techniques, and Quality of Conversational Agents for Mental Health and Well-Being: Systematic Assessment of Apps
  • Jul 18, 2023
  • Journal of Medical Internet Research
  • Xiaowen Lin + 6 more

BackgroundMental disorders cause substantial health-related burden worldwide. Mobile health interventions are increasingly being used to promote mental health and well-being, as they could improve access to treatment and reduce associated costs. Behavior change is an important feature of interventions aimed at improving mental health and well-being. There is a need to discern the active components that can promote behavior change in such interventions and ultimately improve users’ mental health.ObjectiveThis study systematically identified mental health conversational agents (CAs) currently available in app stores and assessed the behavior change techniques (BCTs) used. We further described their main features, technical aspects, and quality in terms of engagement, functionality, esthetics, and information using the Mobile Application Rating Scale.MethodsThe search, selection, and assessment of apps were adapted from a systematic review methodology and included a search, 2 rounds of selection, and an evaluation following predefined criteria. We conducted a systematic app search of Apple’s App Store and Google Play using 42matters. Apps with CAs in English that uploaded or updated from January 2020 and provided interventions aimed at improving mental health and well-being and the assessment or management of mental disorders were tested by at least 2 reviewers. The BCT taxonomy v1, a comprehensive list of 93 BCTs, was used to identify the specific behavior change components in CAs.ResultsWe found 18 app-based mental health CAs. Most CAs had <1000 user ratings on both app stores (12/18, 67%) and targeted several conditions such as stress, anxiety, and depression (13/18, 72%). All CAs addressed >1 mental disorder. Most CAs (14/18, 78%) used cognitive behavioral therapy (CBT). Half (9/18, 50%) of the CAs identified were rule based (ie, only offered predetermined answers) and the other half (9/18, 50%) were artificial intelligence enhanced (ie, included open-ended questions). CAs used 48 different BCTs and included on average 15 (SD 8.77; range 4-30) BCTs. The most common BCTs were 3.3 “Social support (emotional),” 4.1 “Instructions for how to perform a behavior,” 11.2 “Reduce negative emotions,” and 6.1 “Demonstration of the behavior.” One-third (5/14, 36%) of the CAs claiming to be CBT based did not include core CBT concepts.ConclusionsMental health CAs mostly targeted various mental health issues such as stress, anxiety, and depression, reflecting a broad intervention focus. The most common BCTs identified serve to promote the self-management of mental disorders with few therapeutic elements. CA developers should consider the quality of information, user confidentiality, access, and emergency management when designing mental health CAs. Future research should assess the role of artificial intelligence in promoting behavior change within CAs and determine the choice of BCTs in evidence-based psychotherapies to enable systematic, consistent, and transparent development and evaluation of effective digital mental health interventions.

  • PDF Download Icon
  • Research Article
  • Cite Count Icon 41
  • 10.7759/cureus.50729
Safety of Large Language Models in Addressing Depression.
  • Dec 18, 2023
  • Cureus
  • Thomas F Heston

Background Generative artificial intelligence (AI) models, exemplified by systems such as ChatGPT, Bard, and Anthropic, are currently under intense investigation for their potential to address existing gaps in mental health support. One implementation of these large language models involves the development of mental health-focused conversational agents, which utilize pre-structured prompts to facilitate user interaction without requiring specialized knowledge in prompt engineering. However, uncertainties persist regarding the safety and efficacy of these agents in recognizing severe depression and suicidal tendencies. Given the well-established correlation between the severity of depression and the risk of suicide, improperly calibrated conversational agents may inadequately identify and respond to crises. Consequently, it is crucial to investigate whether publicly accessible repositories of mental health-focused conversational agents can consistently and safely address crisis scenarios before considering their adoption in clinical settings. This study assesses the safety of publicly available ChatGPT-3.5 conversational agents by evaluating their responses to a patient simulation indicating worsening depression and suicidality. Methodology This study evaluated ChatGPT-3.5 conversational agents on a publicly available repository specifically designed for mental health counseling. Each conversational agent was evaluated twice by a highly structured patient simulation. First, the simulation indicated escalating suicide risk based on the Patient Health Questionnaire (PHQ-9). For the second patient simulation, the escalating risk was presented in a more generalized manner not associated with an existing risk scale to assess the more generalized ability of the conversational agent to recognize suicidality. Each simulation recorded the exact point at which the conversational agent recommended human support. Then, the simulation continued until the conversational agent stopped entirely and shut down completely, insisting on human intervention. Results All 25 agents available on the public repository FlowGPT.com were evaluated. The point at which the conversational agents referred to a human occurred around the mid-point of the simulation, and definitive shutdown predominantly only happened at the highest risk levels. For the PHQ-9 simulation, the average initial referral and shutdown aligned with PHQ-9 scores of 12 (moderate depression) and 25 (severe depression). Few agents included crisis resources - only two referenced suicide hotlines. Despite the conversational agents insisting on human intervention, 22 out of 25 agents would eventually resume the dialogue if the simulation reverted to a lower risk level. Conclusions Current generative AI-based conversational agents are slow to escalate mental health risk scenarios, postponing referral to a human to potentially dangerous levels. More rigorous testing and oversight of conversational agents are needed before deployment in mental healthcare settings. Additionally, further investigation should explore if sustained engagement worsens outcomes and whether enhanced accessibility outweighs the risks of improper escalation. Advancing AI safety in mental health remains imperative as these technologies continue rapidly advancing.

  • Research Article
  • 10.48175/ijarsct-29514
Chatbot Using Artificial Intelligence
  • Nov 17, 2025
  • International Journal of Advanced Research in Science, Communication and Technology
  • Nikita Gore + 3 more

Abstract: The way people interact with computers has been completely transformed by artificial intelligence (AI), which has led to the development of sophisticated conversational devices called chatbots. These AI- powered systems are designed to replicate human speech through text or voice interfaces, enabling automated medical assistance, education, customer service, and more. The evolution, design, and application of AI-based chatbots are examined in this study, with a focus on the models used for machine learning (ML), natural language processing (NLP), and chatbot architecture. It discusses the underlying technologies, looks at several AI frameworks, and highlights the advantages and ethical quandaries of adopting AI chatbots. The report concludes with an outlook on future developments in human–AI conversational systems. Artificial intelligence (AI) has revolutionized human-computer interaction with the introduction of intelligent conversational agents known as chatbots. By simulating real human conversation through text or speech interfaces, these AI-powered solutions allow for automation in fields like healthcare, education, and customer service. With an emphasis on the systems architecture, machine learning (ML) models, and natural language processing (NLP) techniques, this paper provides a comprehensive analysis of the creation, design, and application of AI- based chatbots. It also evaluates the effectiveness and flexibility of conversational systems, examines the underlying frameworks that support them, and tackles the ethical concerns related to AI-driven communication. The study's conclusion offers insights into future directions and research directions in the development of conversational robots that are more emotionally intelligent, context aware, and morally aligned. Chatbots are intelligent conversational agents that simulate human-like interactions through text or voice interfaces. From simple rule-based systems to adaptive, context-aware conversational agents that can understand user intent and offer customized responses, modern chatbots have advanced because of the integration of artificial intelligence (AI), natural language processing (NLP), and machine learning. This study examines the creation and uses of AI-based chatbots, emphasizing its applicability in e-commerce, healthcare, education, and customer support. The fundamental techniques, such as data gathering, NLP processing, model training, and deployment, are also covered. While describing future approaches for more intelligent, sympathetic, and multimodal conversational agents, the study also discusses issues including context retention, ambiguity handling, ethical considerations, and data privacy.

  • Research Article
  • Cite Count Icon 4
  • 10.1192/j.eurpsy.2024.1143
Chatbots for Well-Being: Exploring the Impact of Artificial Intelligence on Mood Enhancement and Mental Health
  • Apr 1, 2024
  • European Psychiatry
  • R M Lopes + 3 more

Introduction Over the past few years, Psychiatry has undergone a significant transformation with the integration of Artificial Intelligence (AI). This shift has been driven by the increasing demand for mental health services, as well as advances in AI technology. AI analyzes extensive datasets, including text, voice, and behavioral data, aiding in mental health diagnosis and treatment. Consequently, a range of AI-based interventions has been developed, including chatbots, virtual therapists and apps featuring cognitive-behavioral therapy (CBT) modules. Notably, chatbots, as conversational agents, have emerged as valuable tools, assisting users in monitoring emotions and providing evidence-based resources, well-being support, psychoeducation and adaptive coping strategies.ObjectivesThis study aims to investigate the impact of AI chatbots on improving mental health, evaluate their strengths and weaknesses and explore their potential for early detection and intervention in mental health issues.Methods A literature review was conducted through PubMed and Google Scholar databases, using keywords ‘artificial intelligence’, ‘chatbot’ and ‘mental health’. The selection focused on the most relevant articles published between January 2021 and September 2023.ResultsMental health chatbots are highly personalized, with a primary focus on addressing issues such as depression or anxiety within specific clinical population groups. Through the integration of Natural Language Processing (NLP) techniques and rule-based AI algorithms, these chatbots closely simulate human interactions and effectively instruct users in therapeutic techniques. While chatbots integrating CBT principles have gained widespread use and extensive research attention, some also incorporate alternative therapeutic approaches, including dialectical behavior therapy, motivational interviewing, acceptance and commitment therapy, positive psychology or mindfulness-based stress reduction. AI chatbots provide substantial advantages in terms of accessibility, cost-effectiveness and improved access to mental health support services. Nonetheless, they also exhibit limitations, including the absence of human connection, limited expertise, potential for misdiagnosis, privacy concerns, risk of bias and limitations in risk assessment accuracy.ConclusionsAI-based chatbots hold the potential to enhance patient outcomes by enabling early detection and intervention in mental health issues. However, their implementation in mental health should be approached with caution. Further studies are essential to thoroughly evaluate their effectiveness and safety.Disclosure of InterestNone Declared

  • Research Article
  • Cite Count Icon 1
  • 10.2196/76377
Acceptability of a Conversational Agent–Led Digital Program for Anxiety: Mixed Methods Study of User Perspectives
  • Nov 4, 2025
  • JMIR Human Factors
  • Pearla Papiernik + 13 more

BackgroundThe prevalence of anxiety and depression is increasing globally, outpacing the capacity of traditional mental health services. Digital mental health interventions (DMHIs) provide a cost-effective alternative, but user engagement remains limited. Integrating artificial intelligence (AI)–powered conversational agents may enhance engagement and improve the user experience; however, with AI technology rapidly evolving, the acceptability of these solutions remains uncertain.ObjectiveThis study aims to examine the acceptability, engagement, and usability of a conversational agent–led DMHI with human support for generalized anxiety by exploring patient expectations and experiences through a mixed methods approach.MethodsParticipants (N=299) were offered a DMHI for up to 9 weeks and completed postintervention self-report measures of engagement (User Engagement Scale [UES]; n=190), usability (System Usability Scale [SUS]; n=203), and acceptability (Service User Technology Acceptability Questionnaire [SUTAQ]; n=203). To explore expectations and experiences with the program, a subsample of participants completed qualitative semistructured interviews before the intervention (n=21) and after the intervention (n=16), which were analyzed using inductive thematic analysis.ResultsParticipants rated the digital program as engaging (mean UES total score 3.7; 95% CI 3.5-3.8), rewarding (mean UES rewarding subscale 4.1; 95% CI 4.0-4.2), and easy to use (mean SUS total score 78.6; 95% CI 76.5-80.7). They were satisfied with the program and reported that it increased access to and enhanced their care (mean SUTAQ subscales 4.3-4.9; 95% CI 4.1-5.1). Insights from pre- and postintervention qualitative interviews highlighted 5 themes representing user needs important for acceptability: (1) accessible mental health support, in terms of availability and emotional approachability (Accessible Care); (2) practical and effective solutions leading to tangible improvements (Effective Solutions); (3) a personalized and tailored experience (Personal Experience); (4) guidance within a clear structure, while retaining control (Guided but in Control); and (5) a sense of support facilitated by human involvement (Feeling Supported). Overall, the DMHI met participant expectations, except for theme 3, as participants desired greater personalization and reported frustration when the conversational agent misunderstood them.ConclusionsIncorporating factors critical to patient acceptability into DMHIs is essential to maximize their global impact on mental health care. This study provides both quantitative and qualitative evidence for the acceptability of a structured, conversational agent–driven digital program with human support for adults experiencing generalized anxiety. The findings highlight the importance of design, clinical, and implementation factors in enhancing engagement and reveal opportunities for ongoing optimization and innovation. Scalable models with stratified human support and the safe integration of generative AI have the potential to transform patient experience and increase the real-world impact of conversational agent–led DMHIs.Trial RegistrationISRCTN Registry ISRCTN 52546704; https://www.isrctn.com/ISRCTN52546704

  • Research Article
  • Cite Count Icon 27
  • 10.1097/nci.0b013e31827b7746
Principled Moral Outrage
  • Jan 1, 2013
  • AACN Advanced Critical Care
  • Cynda Hylton Rushton

Critical care clinicians commonly find themselves in situations that challenge their integrity as individuals and as professionals. In response to these situations, many clinicians experience moral distress.When moral distress cannot be relieved and integrity cannot be restored, moral or ethical outrage may ensue. This column explores the contours of moral outrage, offers a definition of principled moral outrage, and suggests strategies for working more skillfully with the inevitable challenges to integrity that occur in the critical care environment.Moral outrage has been described broadly as anger provoked by a real or perceived violation of an ethical standard such as fairness, respect, or beneficence.2,3 Pike4(p351) describes moral outrage as "characterized by energy-draining frustration, anger, disgust, and powerlessness." The psychological processes that affect the intensity of moral outrage may be activated by threats to personal or professional role, identity, self-worth, or integrity; by beliefs or customs that are different from one's own; or challenges to the beliefs or values that are integral to personal or professional identity.5 In the context of critical care, nurses may, for example, perceive that their nursing identity is irrevocably tarnished when they participate in actions that result in unrelieved suffering or when their efforts to advocate for their patients fail, resulting in unjust treatment allocation. Such actions threaten nurses' ability to uphold the core values of the nursing profession to provide respectful, nondiscriminatory care to all persons and to avoid harm to their patients.6Moral outrage, perceived as justified anger, is primarily directed toward another individual or group, rarely toward oneself.5 This orientation calls for greater awareness of the sources and responses to moral outrage so that it can be distinguished from other strong emotional responses that may involve projection, rationalization, displacement, or reaction formation.5 Critical care clinicians, for example, may feel frustrated at not being able to achieve the desired outcome for a patient and blame other clinicians or specialists for their inability to achieve their goals. Similarly, in some instances, the anger or outrage is directed toward the administration of the institution, the government, or the policy maker. These sources of strong reactions need to be differentiated to determine whether moral outrage is, in fact, the source of the response.Moral outrage may be the initiator of action or inaction and likewise can remain as a painful residue of morally distressing situations. When confronted with conflicts among ethical principles, one cannot prioritize one value or principle without abandoning another. Any decision will result in the loss of something important and highly valued. As clinicians, we must acknowledge, however, that even when we are able to preserve or restore our integrity, a moral residue may persist in response to the ethical values that were not fully upheld but are highly valued.Critical care clinicians should distinguish between moral outrage that is grounded in principled discernment and action from an impulsive, unreflected emotional reaction that lacks sufficient grounding in ethical values or standards.Marva Stephens, 47 years old, was diagnosed with stage 4 ovarian cancer 18 months ago. She is the mother of 2 daughters, Sarah, 6, and Ruth, 8. After several rounds of aggressive treatment, she was admitted with fungal sepsis, respiratory failure, and renal insufficiency to the intensive care unit (ICU). Prior to her admission, Marva executed an advance directive designating her husband, Mark, as her health care agent. She indicated that she wanted "everything done" to keep her alive. Over the next few weeks, her condition continued to decline despite dialysis, high-pressure ventilation that necessitated a tracheostomy, and myriad medications and treatments. She developed a sacral decubitus ulcer and experienced pain, despite aggressive treatment. She would often mouth words that the nurses interpreted as requests to end her suffering.On day 63 in the ICU, she experienced another bout of sepsis and hemodynamic instability. The ICU team members felt that they were already providing maximal pharmacological support and that resuscitation, given the advanced stage of her cancer, would be futile. When they approached Mark about their concerns, he adamantly replied that he wanted everything done, including cardiopulmonary resuscitation (CPR). Several nurses felt that to resuscitate Marva would cause unjustified harm and disproportionate suffering, would not change her ultimate outcome, and would, perhaps, undermine her preferences. In giving CPR, they would violate their personal and professional integrity. They reasoned that participating in CPR would justify an act that they believed was wrong. Later that evening, Marva went into cardiac arrest and CPR was initiated. After 25 minutes of resuscitation, they were able to reestablish a cardiac rhythm. She is now unresponsive, receiving maximal pharmacological support, and venous access is depleted. Mark continues to request that everything be done to sustain her life. Several members of the team express anger and moral outrage at being asked to inflict a therapy, CPR, on a patient who is clearly not going to survive.Moral outrage may be an appropriate response to situations that compromise a person's important ethical values or standards. The emotional responses to egregious situations can provide the fuel for discernment and action that arises from wisdom and compassionate action. Emotions can be a rich source of insight and information that one needs to discern the moral contours of a situation or issue, to evaluate the ethically permissible or ethically required actions to address the concern, and to motivate and sustain the courage needed to persevere despite resistance.Ungrounded moral outrage can be disturbing and detrimental to all parties involved. When deeply held values are at stake, absolutism, either/or thinking, power struggles, and blaming or disconnection can arise. In critical care settings, nurses and others may find themselves in intractable conflict with patients or surrogates or members of the interdisciplinary team. Likewise, individuals may justify their anger toward another group by giving their anger moral sanction. In contrast, some people will become morally deaf or silent by failing to speak up about violations of ethical values or to overlook or be inattentive to moral issues voiced by others.7 Critical care clinicians may experience intensified moral outrage, for instance, if their leaders fail to respond to their requests for guidance or intervention.Breaches of ethical values and principles affect a person's whole being in varying degrees of intensity and consequence. The consequences to the persons who are mounting claims that are fueled by moral outrage of this sort often are overlooked. With the nervous system stuck on high alert, the chronic activation of the stress response can arguably lead to depletion of vital energy, physical and emotional symptoms, unprofessional behaviors, and erosion of teamwork and patient centeredness. It can also lead to apathy when the person shuts down and becomes numb and morally mute. Bird7(p2) says, "people are morally mute when they fail to defend their ideals and when they cave in too easily and do not bargain vigorously for positions they judge to be right."The way that an individual relates to the real or perceived breach of a moral value or principle will inform the way he or she responds to it. Past experiences; the degree of physical, emotional, or moral attunement; and awareness of one's vulnerabilities to emotional wounds or moral blindness can influence responses. At times, the narrative surrounding a clinical case involving moral distress includes the phrase "Why are we doing this?" and may include references to prior cases that have resulted in moral distress or have had ethically unsatisfactory outcomes. Similarly, how one perceives personal, professional, and collective responsibility to address the breach of ethical values can affect the response one pursues.Critical care clinicians can be vulnerable to the detrimental effects of unexamined and cumulative moral distress that leads to moral outrage. Moral outrage can become the glue that holds a group together in a sense of solidarity against those who threaten their personal or professional identities, values, beliefs, or integrity. The sense of moral outrage can become contagious and, if unexamined, can exacerbate differences and fuel separation rather than connection and cooperation.For moral outrage to be principled, one must cultivate the conditions for wisdom, empathy, and compassion to arise. The ability to perceive the situation and experience of the other and to attune to it allows us to experience moral outrage about our own circumstances and the circumstances of others, without being overwhelmed by it.8 Discernment, inquiry, and self-effacement are essential to determine the right and best response to these situations.9 Discerning the right response invites an appraisal of one's mental and emotional stability to ground one's responses on a foundation of clarity and nonreactivity. Each potential response may be justified on the basis of the circumstances of the situation, the moral viewpoint one takes, and a focused reasoning process.In response to situations or actions that violate ethical values or standards, principled moral outrage arises from a balanced stance of wisdom and compassion that informs actions that seek to reestablish a moral value or standard and preserves integrity. It is a creative space that is sourced from a place of honor, respect, peace, equity, and dignity (M. Sharma, personal oral communication, 2012). Principled moral outrage is grounded in a state of mental and emotional stability in which anger and distress are modulated and action is compassionate. In this sense, compassion is a rigorous, balanced stance of a "strong back" that allows one to be clear, nonreactive, courageous, and principled in the midst of the most challenging circumstances and a "soft front" of open heartedness, kindness, and empathy leading to compassion.10 As some may perceive, it is not an ungrounded tolerance of unacceptable conditions or passive inaction.The preservation of integrity is the fundamental goal of responding to violations of ethical standards and moral outrage. McFall11(p9) suggests that "personal integrity requires that an agent subscribe to some consistent set of principles or commitments and in the face of temptation or challenge, uphold these principles or commitments, for what the agent takes to be the right reasons." This perspective on personal integrity presumes a level of awareness and insight that is coupled with the cognitive skills and abilities to reason and deliberate about various options and to assess the impact of various actions. Cultivating these capacities and skills is necessary to move from ungrounded moral outrage to principled moral outrage.A person who is acting from principled moral outrage is able to make important distinctions, including being able to distinguish self from other and to recognize the inherent interconnection of all beings. Distinguishing what is happening to the patient or others as separate from one's personal experience can help the clinician have greater perspective about violations of ethical values or standards and how different persons may or may not be affected. Likewise, if a person adopts a stance of objectifying others (such as patient, family member, colleague, or administrator) or intensifying separation by highlighting differences rather than similarities, his or her actions can become a vehicle for working out other related or nonrelated concerns or issues. In principled moral outrage, separation of self from other dissolves, and the interconnection among all beings becomes primary. Action arises from the recognition that harm to one being constitutes harm to all beings. From this space, our individual and collective responsibility to take reasoned steps to address the root causes and consequences of egregious patterns of behavior becomes clear.Similarly, moral outrage must be distinguished from frustration that produces anger that may not be ethical in nature. Focused or generalized anger invites inquiry in principled moral outrage to locate the source of one's anger and frustration, which requires self-awareness and emotional intelligence to be able to intentionally make these important distinctions and to avoid the anger becoming unconsciously contagious among the persons involved.Principled moral outrage leverages ethically sound, modulated responses to address the injustices of situations, the violation of core ethical values or standards, and threats to integrity. Recognizing the time for action is foundational to integrity. Waiting until the situation has deteriorated beyond repair undermines the possibility for integrity-preserving action. If one overlooks or rejects such opportunities to act individually and collectively, one may be inadvertently participating in acts that are morally unjustified and in so doing give legitimacy to the act and contribute to individual and collective harms. Similarly, an insidious apathy and powerlessness can ultimately undermine individual and collective agency, integrity, and trust.A hallmark of principled moral outrage is an uncompromising commitment to uphold the highest ethical values and principles and to speak up about violations of these values and principles, which may involve executing unpopular decisions and, when appropriate, conscientiously objecting to ethically compromising situations despite resistance in a fair, respectful, and modulated manner. It does not imply apathy, disregard, or indifference to egregious situations. On the contrary, intentionally determining personal and collective thresholds of accommodation of morally distressing situations and defining norms governing when action is permissible or required are necessary steps.Violations of conscience can invite various responses, ranging from (1) finding a compromise that preserves integrity particularly when there is factual confusion, uncertainty, conceptual ambiguity, and moral complexity12; (2) raising a conscientious voice to bring awareness to or criticize a practice or violation of an ethical standard; (3) refusing to participate on the basis of conscience violations; (4) responsible whistleblowing arising from clarity, nonreaction, and ethical resolve rather than anger and retaliation; and (5) conscientious exiting from institutions or situations where efforts to address isolated or repeated instances that result in moral outrage are unaddressed, unresolved, or continue to compromise integrity.Undoubtedly, some of the clinicians caring for Marva are experiencing moral distress and/or moral outrage. They may perceive that their mandate to do no harm, to relieve suffering, and to benefit their patients has been violated by administering CPR. The understandable frustration and resultant anger could be directed at (1) the situation generally, (2) Marva and Mark for insisting that they "do everything," (3) themselves for not being able to change the situation that resulted in their participation in acts that they believe are wrong, or (4) institutional or public policies or laws that they perceive forbid them from doing what they believe is ethically correct.If we were to apply a model of principled moral outrage to their experience, we may find that some clinicians would be able to see their actions from a different vantage point, others would conclude that the value of helping Mark live with his wife's death superseded their own distress, and still others may conclude that CPR was ethically unjustified. What would individuals who felt that ethical values and their integrity are compromised do differently? Would the outcome be different?The outcome could be different or the same but arrived at from a different vantage point and awareness. Ideally, the tenor and internal would be different. Instead of a contentious, angry, and blaming stance, a ground of neutrality, insight, understanding, and wisdom could be created. Engaging with each other with the intention to understand, rather than convince the other of their viewpoint, has the potential to shift both parties to greater respect and understanding. Each of the ethically justifiable options would be arrived at from a stable, nonreactive foundation that allows individuals to discern the ethical conflicts and possible responses and determine the best course of action. Action in Marva's case might include exploring more fully the meaning Mark ascribes to doing "everything." Quill et al13(p345) suggest exploring the "balance of treatment burden and benefit, the emotional, cognitive, spiritual, and family factors that underlie their request, proposing a philosophy of treatment and making recommendations that capture the patient's values and preferences." They go on to advocate for directly responding to the emotional reactions and disagreements and using harm-reduction strategies to proactively address instances in which the patient or family continues to request treatment that is strictly futile or disproportionately burdensome.Ideally, action would be grounded in a confidence that accompanies knowing that the process was clean and clear, and that there had been sufficient vigilance to avoid the human inclination to slide into a space of negativity, blame, and self-righteous indignation when confronted with conflict. In Marva's case, both Mark and the clinicians caring for her will understand where their values align while acknowledging their differences. Clinicians would continue to discover a path of collaboration that allows for respectful communication and decision making while creating transparent boundaries for treatment and a trustworthy process of negotiating disagreements, and in rare instances, notifying the patient or family when violations of conscience have reached a magnitude where individuals must take steps to preserve their integrity. Consistent with principled moral outrage, such disclosures are made without threats of abandonment, reprisal, or retaliation and offer a process for transferring care or seeking other avenues of resolution. The concept of principled moral outrage allows clinicians to honor the integrity of all persons, to remember the intentions of their work and professions, to avoid win-lose responses, and to have the courage to act with ethical fortitude, despite resistance, isolation, or fear.Clearly, a range of ethically permissible options is available in cases such as this one. Quill et al13 suggest options such as (1) maximizing comfort even if it unintentionally shortens length of life, (2) developing a range of options that include different thresholds of patient burden that is balanced with the benefit of prolonging life, or (3) withholding or withdrawing therapies that are not desired by the competent patient or his or her designated surrogate. Although these treatment options are very different, the team can reach a consensus on a treatment plan that supports consideration of alternative actions, such as unilateral withdrawal of life-sustaining therapies, transfer to another clinician, invocation of an institutional futility policy, or court intervention. Collective action of this type can either inflame distress or help neutralize it and the consequences carefully considered before enacting.Taking proactive steps to respond to predictable value conflicts before relationships are eroded is another way to support an environment in which principled moral outrage is possible. In Marva's case, for example, early cues suggested that value conflicts and disagreements were likely. Putting in place mechanisms that proactively address these concerns, such as daily clinical rounds, ongoing family meetings, and patient care conferences, can create opportunities for addressing concerns and instituting alternative strategies.Ideally, as an outgrowth of this case and others like it, critical care clinicians would proactively identify ethical conflicts; engage in ongoing discernment and dialogue with the patient, family, and team members; and consider how to engage in conversations about ethically challenging choices with patients and families that include setting ethically justifiable limits, allocating human and material resources, and using procedural safeguards and support. In the end, some clinicians may not waver in their conclusions that CPR is ethically wrong, but what may be different is the way they enact their decisions, reflecting a grounding that allowed them to bring the issues surrounding the case to the family, their administrators, and arguably to policy makers in a way that enhanced the likelihood of effective action through restoring important ethical values and preserving integrity.One must also acknowledge that in some cases, clinicians' best efforts do not produce beneficial results, and integrity cannot be restored and the detrimental effects of moral outrage cannot be mitigated. These cases also offer an opportunity for discernment, reflection, and generosity toward others and ourselves. Accepting the limits of our efforts with compassion and kindness can help release us from the inevitable moral residue that persists and the suffering that accompanies it. Developing processes to address these realities is essential in creating an ethical practice environment.Critical care clinicians are not immune to the detrimental effects of moral distress and moral outrage. Table 1 provides methods to mitigate the detrimental effects of moral outrage that can help critical care clinicians to clarify their responses, ground themselves in their commitments and ethical values, discern the right action to take, and implement those actions with clarity, nonreactivity, and integrity.Critical care clinicians have a responsibility to speak up about situations that cause them moral outrage, regardless of the outcome of their efforts. Ideally, the outcome of being able to modulate moral distress and moral outrage is a state of equilibrium, integrity, and resilience that allows clinicians to restore their integrity and maintain their resolve to serve their patients and families from the highest ethical values and standards. Resilience, the ability to return to a restorative limit of function in the midst of challenging circumstances, results from focused awareness and modulation of somatic, emotional, spiritual, and moral stimuli within a zone of stability and well-being. Morally distressing situations that cause moral outrage are unlikely to be eradicated. Resilience suggests that although the of may not the intensity of situations that result in moral outrage may not result in detrimental In clinicians have the to blame something or for the injustices that in critical care practice or to to how they feel and act in the of their moral outrage and to make the to engage from a stance of principled moral outrage and Sharma, and the team of With for their wisdom, and support in the of this

  • Research Article
  • Cite Count Icon 2
  • 10.2139/ssrn.3565834
Can Entrepreneurship Be Learned by Intelligent Machines?
  • Apr 27, 2020
  • SSRN Electronic Journal
  • Steven Phelan

This paper explores whether human entrepreneurs will be supplanted by intelligent machines. It starts by considering the capacity for machines to engage in entrepreneurial activity using big data and modern artificial intelligence techniques. A critique of artificial intelligence (AI) is then presented that draws a sharp distinction between narrow AI and general AI. Computers are currently incapable of general AI because they lack a theory of mind and self-awareness. Both of these attributes are critical for successful entrepreneurship making it unlikely that computers will displace human entrepreneurs any time soon.

  • Supplementary Content
  • Cite Count Icon 2
  • 10.2196/72400
Exploring the Application of AI and Extended Reality Technologies in Metaverse-Driven Mental Health Solutions: Scoping Review
  • Aug 19, 2025
  • Journal of Medical Internet Research
  • Aliya Tabassum + 3 more

BackgroundMental health systems worldwide face unprecedented strain due to rising psychological distress, limited access to care, and an insufficient number of trained professionals. Even in high-income countries, the ratio of patients to health care providers remains inadequate to address demand. Emerging technologies such as artificial intelligence (AI) and extended reality (XR) are being explored to improve access, engagement, and scalability of mental health interventions. When integrated into immersive metaverse environments, these technologies offer the potential to deliver personalized and emotionally responsive mental health care.ObjectiveThis scoping review explores the state-of-the-art applications of AI and XR technologies in metaverse frameworks for mental health. It identifies technological capabilities, therapeutic benefits, and ethical limitations, focusing on governance gaps related to data privacy, patient-clinician dynamics, algorithmic bias, digital inequality, and psychological dependency.MethodsA systematic search was conducted across 5 electronic databases—PubMed, Scopus, IEEE Xplore, PsycINFO, and Google Scholar—for peer-reviewed literature published between January 2014 and October 2024. Search terms included combinations of “AI,” “XR,” “VR,” “mental health,” “psychotherapy,” and “metaverse.” Studies were eligible if they (1) involved mental health interventions; (2) used AI or XR within immersive or metaverse-like environments; and (3) were empirical, peer-reviewed articles in English. Editorials, conference summaries, and articles lacking clinical or technical depth were excluded. Two reviewers independently screened titles, abstracts, and full texts using predefined inclusion and exclusion criteria, with Cohen κ values of 0.85 and 0.80 indicating strong interrater agreement. Risk of bias was not assessed due to the scoping nature of the review. Data synthesis followed a narrative approach.ResultsOf 1288 articles identified, 48 studies met the inclusion criteria. The included studies varied in design and scope, with most studies conducted in high-income countries. AI applications included emotion detection, conversational agents, and clinical decision-support systems. XR interventions ranged from virtual reality–based cognitive behavioral therapy and exposure therapy to avatar-guided mindfulness. Several studies reported improvements in patient engagement, symptom reduction, and treatment adherence. However, many studies were limited by small sample sizes, single-institution settings, and lack of longitudinal validation. Ethical risks identified included opaque algorithmic processes, risks of psychological overdependence, weak data governance, and the exclusion of digitally marginalized populations.ConclusionsAI and XR technologies integrated within metaverse settings represent promising tools for enhancing mental health care delivery through personalization, scalability, and immersive engagement. However, the current evidence base is limited by methodological inconsistencies and a lack of long-term validation. Future research should use disorder-specific frameworks; adopt standardized efficacy measures; and ensure inclusive, ethical, and transparent development practices. Strong interdisciplinary governance models are essential to support the responsible and equitable integration of AI-driven XR technologies into mental health care. The narrative synthesis limits generalizability, and the absence of a risk of bias assessment hinders critical appraisal.

  • Research Article
  • 10.30727/0235-1188-2024-67-3-99-122
Artificial Intelligence as a Factor in State and Society Transformation: Finding Balance between Administrative Efficiency and Human-Centricity
  • Aug 15, 2024
  • Russian Journal of Philosophical Sciences
  • Boris B Slavin

The article presents a socio-philosophical analysis of artificial intelligence (AI) integration into public administration systems. The research focuses on identifying an optimal balance between enhancing administrative efficiency and preserving humanistic values. The author examines diverse perspectives on AI’s role in contemporary society, ranging from techno-optimistic concepts that view AI as a tool for qualitative improvement of human life, to critical theories warning of dehumanization risks and increased social control. The paper conducts a comparative analysis of national AI development strategies among leading global powers, identifying their common features and significant differences shaped by cultural, political, and economic factors. Potential risks and threats associated with the implementation of AI systems in public administration are explored, including issues of personal data protection, information security, and the ethical dimensions of algorithmic decision-making. The concept of a human-centered approach to AI is examined as a potential guiding principle for the development and deployment of these technologies. Various levels of control over AI systems are characterized, encompassing legal regulation, professional and public evaluation. Particular attention is given to the prospects of artificial general intelligence (AGI) development and its potential impact on the transformation of state institutions and social relations. The study argues that AGI architecture, enabling genuine system agency, must incorporate a level responsible for actualization functions (strategic goal-setting, ethics, knowledge, and self-identification). Special emphasis is placed on the system’s awareness of its finite existence as a necessary condition for developing meaningful operational strategies and ethical principles. The article concludes that as AI technologies advance, the importance of ethical norms, value systems, and responsibility principles increases since these core societal factors cannot be fully replaced even by the most sophisticated regulation. The author highlights the growing significance of mutual trust between state and society in an environment where AI systems provide unprecedented opportunities for social control.

  • Research Article
  • Cite Count Icon 5
  • 10.1176/appi.pn.2022.05.4.50
Popularity of Mental Health Chatbots Grows
  • May 1, 2022
  • Psychiatric News
  • Nick Zagorski

Popularity of Mental Health Chatbots Grows

  • Research Article
  • Cite Count Icon 2
  • 10.1111/fare.13158
Enhancing parental skills through artificial intelligence‐based conversational agents: The PAT Initiative
  • Feb 27, 2025
  • Family Relations
  • Milagros C Escoredo + 5 more

ObjectiveWe aim to describe the development of a conversational agent (CA) for parenting, termed PAT (Parenting Assistant platform), to demonstrate how artificial intelligence (AI) can enhance parenting skills.BackgroundBehavioral problems are the most common issues in childhood mental health. Developing and disseminating scalable interventions to address early‐stage behavioral problems are of high priority. Artificial intelligence (AI)‐based CAs can offer innovative methods to deliver parenting interventions to reduce behavioral problems. CAs have the capability to interact through text or voice conversations and can undergo training using evidence‐based parenting programs. However, research on CAs for parenting and behavioral problems is limited.ExperienceThe development of PAT consisted of three phases: Phase 1 was purely rule‐based, Phase 2 was hybrid (rule‐based format plus large language models), and Phase 3 featured an agentic architecture. The latest version of PAT includes prompt engineering, guardrails, retrieval‐augmented generation, few‐shots learning, context, and memory management through agentic architecture. Although comprehensive empirical results are pending, the iterative development and enhancement of PAT indicate the potential for effective digital intervention. The agentic architecture of the latest version of PAT aims to provide robust, context‐aware interactions to support parenting challenges.ImplicationsCAs have the potential to reach a broader population of parents and deliver personalized interventions tailored to their specific needs. Moreover, CAs are structured to provide timely support, which can enhance family dynamics and contribute to improved long‐term outcomes for both parents and children.ConclusionAI‐based CAs can be used as alternatives to waitlists; as digital cotherapists; and implemented in health care, mental health, and school settings. The potential benefits and risks of the different types of CA and features are discussed.

  • Research Article
  • Cite Count Icon 13
  • 10.61797/ijaaiml.v1i1.35
Development of Conversational Artificial Intelligence for Pandemic Healthcare Query Support
  • Oct 30, 2020
  • International Journal of Automation, Artificial Intelligence and Machine Learning
  • Wai Lok Woo + 3 more

The paper proposes and describes the development of conversational artificial intelligence (AI) agent to support hospital healthcare and COVID-19 queries. The conversational AI agent is called “Akira” and it is developed using deep neural network and natural language processing. It is capable of reading the inputs from the user, understanding the input and identifying the intention, and outputting messages towards the user, and these steps are iterated until the user prompts to exit or the programme is terminated. A deep learning model has been trained, and Akira could converse with the user ranging from the conversation over 7 topics related to COVID-19, common cold and flu, mental health, sexual health, abortions, allergens, drugs and medicine. The paper also describes the importance of designing an interactive human-user interface when dealing with conversational agent. In addition. the context of ethical issues and security concerns when designing the agent has been taken into consideration and discussed. The conversational agent is demonstrated to answer queries from a pool of 57 participants.

  • Research Article
  • 10.52652/fxyz.22.24.1
Sztuczna inteligencja. Konteksty i interpretacje
  • Oct 25, 2024
  • Formy
  • Agnieszka Zgud + 1 more

Artificial intelligence (AI) has been developing dynamically, and becomes a core issue in the public debate. The related contemporary achievements, such as generative language models, change the way we work, especially in creative fields. The article analyses the historical development of AI, from the Dartmouth workshops in 1956, through John McCarthy’s and Alan Turing’s symbolic approach to artificial intelligence, to the effect of cybernetics on contemporary technologies. It draws attention to Richard Dreyfuss’ and John Searle’s critique of AI, emphasising their meaning in redefining the differences between human and artificial intelligence. In the context of the AI revolution, the article poses a question about the future of design and technology, considering if the process of human adaptation keeps up with the development of machines. It also refers to ethical aspects of automation, and the growing importance of creative thinking in the face of technological progress. Keywords: artificial intelligence (AI), cybernetics, philosophy, history, design

  • Conference Article
  • Cite Count Icon 3
  • 10.1109/icict43934.2018.9034313
A Study of Artificial Social Intelligence in Conversational Agents
  • Nov 1, 2018
  • Vibha Satyanarayana + 3 more

The main goal of Artificial Intelligence (AI) is to make a machine as intelligent as a human. While AI has advanced significantly over the past years, with its ability surpassing humans in several fields, the one thing that it still lacks is social awareness i.e. have the social skills of a human, a sense of what is appropriate and what isn't and make decisions based on that. These are highly subjective and there is no single set of rules to determine them. With the use of AI increasing at such a high rate that AI has become a part of people's everyday lives it is absolutely necessary for AI systems to know what is socially acceptable. In this paper, we have conducted a thorough and systematic study of the current state of the art for implementing social and emotional intelligence into a conversational agent.

  • PDF Download Icon
  • Research Article
  • Cite Count Icon 24
  • 10.3389/fgwh.2023.1084302
Understanding the impact of an AI-enabled conversational agent mobile app on users' mental health and wellbeing with a self-reported maternal event: a mixed method real-world data mHealth study.
  • Jun 2, 2023
  • Frontiers in Global Women's Health
  • Becky Inkster + 2 more

Maternal mental health care is variable and with limited accessibility. Artificial intelligence (AI) conversational agents (CAs) could potentially play an important role in supporting maternal mental health and wellbeing. Our study examined data from real-world users who self-reported a maternal event while engaging with a digital mental health and wellbeing AI-enabled CA app (Wysa) for emotional support. The study evaluated app effectiveness by comparing changes in self-reported depressive symptoms between a higher engaged group of users and a lower engaged group of users and derived qualitative insights into the behaviors exhibited among higher engaged maternal event users based on their conversations with the AI CA. Real-world anonymised data from users who reported going through a maternal event during their conversation with the app was analyzed. For the first objective, users who completed two PHQ-9 self-reported assessments (n = 51) were grouped as either higher engaged users (n = 28) or lower engaged users (n = 23) based on their number of active session-days with the CA between two screenings. A non-parametric Mann-Whitney test (M-W) and non-parametric Common Language effect size was used to evaluate group differences in self-reported depressive symptoms. For the second objective, a Braun and Clarke thematic analysis was used to identify engagement behavior with the CA for the top quartile of higher engaged users (n = 10 of 51). Feedback on the app and demographic information was also explored. Results revealed a significant reduction in self-reported depressive symptoms among the higher engaged user group compared to lower engaged user group (M-W p = .004) with a high effect size (CL = 0.736). Furthermore, the top themes that emerged from the qualitative analysis revealed users expressed concerns, hopes, need for support, reframing their thoughts and expressing their victories and gratitude. These findings provide preliminary evidence of the effectiveness and engagement and comfort of using this AI-based emotionally intelligent mobile app to support mental health and wellbeing across a range of maternal events and experiences.

Save Icon
Up Arrow
Open/Close
  • Ask R Discovery Star icon
  • Chat PDF Star icon

AI summaries and top papers from 250M+ research sources.