Advances, opportunities, and challenges of using modern artificial general intelligence and artificial intelligence-generated content technologies in depression and related disorders – Systematic review

Abstract

Artificial general intelligence (AGI) and artificial intelligence-generated content (AIGC) technologies are transforming mental health care by enabling early diagnosis, personalized treatment, and innovative therapeutic interventions. This systematic review evaluates the applications, benefits, and challenges of AGI and AIGC in diagnosing and managing depression and related disorders. A comprehensive literature search was conducted across PubMed, PsycINFO, EMBASE, Scopus, and Web of Science, with the search last updated on June 2, 2024. Studies were included if they assessed the role of AGI or AIGC in screening, diagnosis, treatment, or monitoring of depression. Exclusion criteria included non-English publications, review articles, and studies unrelated to artificial intelligence (AI) applications in mental health. Risk of bias was evaluated using standardized assessment tools, and findings were synthesized qualitatively. Of 246 identified articles, 34 met the inclusion criteria. Key findings indicate that AGI enhances diagnostic accuracy by integrating multimodal data (e.g., neuroimaging, wearable devices, and behavioral analysis), whereas AI-driven tools improve treatment personalization and real-time monitoring. AI-assisted psychotherapy and drug discovery models show promise in optimizing mental health interventions. However, challenges remain regarding algorithmic bias, data privacy, regulatory compliance, and ethical concerns. AGI and AIGC offer transformative potential in mental health care, improving diagnostic precision and personalized treatment strategies. Further research is required to validate AI-driven interventions, mitigate bias, and establish ethical frameworks for clinical integration. Ensuring equitable access and robust validation will be essential for the responsible adoption of AI in psychiatry.
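As a rough illustration of the screening protocol stated above, the minimal sketch below encodes the inclusion and exclusion criteria as a filter over retrieved records (the record fields and sample entries are hypothetical, not the review's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One retrieved citation; the field names here are hypothetical."""
    title: str
    language: str
    is_review: bool
    on_ai_in_depression_care: bool  # screening, diagnosis, treatment, or monitoring

def meets_inclusion_criteria(rec: Record) -> bool:
    """Apply the stated criteria: English-language, non-review studies
    on AI applications in depression and related disorders."""
    return (rec.language == "English"
            and not rec.is_review
            and rec.on_ai_in_depression_care)

# Toy run with two synthetic records; the review itself screened
# 246 identified articles down to 34 included.
retrieved = [
    Record("AGI-assisted depression screening", "English", False, True),
    Record("A narrative review of therapy chatbots", "English", True, True),
]
included = [r for r in retrieved if meets_inclusion_criteria(r)]
print(len(included))  # -> 1 (the review article is excluded)
```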

Similar Papers
  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer

  • Research Article
  • Cited by 34
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters simultaneously feed the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, the hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging.

In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts stipulating that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear an immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…

  • Research Article
  • Cited by 137
  • 10.3389/fpsyg.2022.971044
Artificial intelligence technologies and compassion in healthcare: A systematic scoping review.
  • Jan 17, 2023
  • Frontiers in psychology
  • Elizabeth Morrow + 6 more

Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is under-conceptualized and underexplored. The aim of this scoping review is to provide a comprehensive and balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? A systematic scoping review was conducted following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to three concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and keywords for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, covering articles published in English between 2011 and 2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data. A multidisciplinary team assessed themes for resonance and relevance to research and practice. Searches identified 3,124 articles, of which 197 were included after screening. The number of articles has increased over the past decade (from n = 1 in 2011 to n = 47 in 2021, with n = 35 in Jan–Aug 2022). Overarching themes related to the review questions were: (1) Developments and debates (7 themes): concerns about AI ethics, healthcare jobs, and loss of empathy; human-centered design of AI technologies for healthcare; optimistic speculation that AI technologies will address care gaps; interrogation of what it means to be human and to care; recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; calls for curricula development and healthcare professional education; and implementation of AI applications to enhance the health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes): empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and providing health information and advice. (3) Gaps in knowledge (4 themes): educational effectiveness of AI-assisted learning; patient diversity and AI technologies; implementation of AI technologies in education and practice settings; and safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes): enriching education, learning, and clinical practice; extending healing spaces; and enhancing healing relationships. There is an association between AI technologies and compassion in healthcare, and interest in this association has grown internationally over the last decade. In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) understanding the suffering (significance, context, rights, responsibilities, etc.); (3) connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) making a judgment about the suffering (the need to act); (5) responding with an intention to alleviate the suffering; (6) attention to the effect and outcomes of the response. These elements can operate at an individual level (human or machine) and at a collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented not as an ideology but through strategic choices, incentives, regulation, professional education and training, and joined-up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
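As a rough sketch of the charting and thematic-coding steps described in this abstract, the extracted fields map onto a simple record type (the field names follow the abstract; the structure itself is an assumption, not the authors' instrument):

```python
from dataclasses import dataclass

@dataclass
class ChartedArticle:
    """Extraction fields named in the abstract's charting description."""
    author: str
    publication_date: str
    article_type: str
    healthcare_context: str  # aim/context of healthcare
    key_findings: str
    country: str

# Thematic analysis then groups coded material under categories derived
# from the review questions; the theme counts below are those reported.
themes: dict[str, list[ChartedArticle]] = {
    "developments_and_debates": [],    # 7 themes
    "how_ai_enhances_compassion": [],  # 10 themes
    "gaps_in_knowledge": [],           # 4 themes
    "key_areas_for_development": [],   # 3 themes
}
```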

  • Research Article
  • Cited by 162
  • 10.3390/cancers12123532
Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine.
  • Nov 26, 2020
  • Cancers
  • Ryuji Hamamoto + 19 more

Simple Summary: Artificial intelligence (AI) technology has been advancing rapidly in recent years and is being implemented in society. The medical field is no exception, and the clinical implementation of AI-equipped medical devices is steadily progressing. In particular, AI is expected to play an important role in realizing the current global trend of precision medicine. In this review, we introduce the history of AI as well as the state of the art of medical AI, focusing on the field of oncology. We also describe the current status of the use of AI for drug discovery in the oncology field. Furthermore, while AI has great potential, there are still many issues that need to be resolved; we therefore provide details on current medical AI problems and potential solutions. In recent years, advances in artificial intelligence (AI) technology have led to the rapid clinical implementation of devices with AI technology in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered to be an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI technology is expected to be positioned as an important core technology. In particular, “precision medicine,” a medical approach that selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be utilized in the process of extracting truly useful information from a large amount of medical data and applying it to diagnosis and treatment. In this review, we introduce the history of AI technology and the current state of medical AI, especially in the oncology field, and discuss the possibilities and challenges of AI technology in the medical field.

  • Research Article
  • 10.52554/kjcl.2024.107.225
A Study on the Civil Liability of Artificial Intelligence
  • Jun 30, 2024
  • The Korean Association of Civil Law
  • Sookyoung Lee

The recent development of artificial intelligence (AI) technology is bringing about changes at a faster pace and on a larger scale than at any other period in human history. With technological advancements overcoming the limitations of medical AI through training on databases, AI technology has made remarkable progress since the inception of deep learning for image processing with convolutional neural networks (CNN) in 2012. Recent advancements in natural language processing (NLP) have accelerated the utilization of AI, enabling machines to identify and understand data regardless of the complexity of the language, and laying the foundation for the rapid and precise development of generative AI. In an era where generative AI is being utilized without any pause in its developmental speed, we considered the civil liability of AI under existing civil law principles, taking into account inherent characteristics of AI such as unpredictability, opacity, and the black-box effect. To do this, we first examined legal liability according to the stages of AI technology development in discussing tort liability caused by AI. Even “Weak AI,” created by AI developers, may fall under “Gefahr,” and while not all types do, some may be subject to strict liability in terms of risk liability. Furthermore, while reviewing civil liability applicable to AI under fault-based and no-fault liability, we also looked comparatively at trends in the EU. In discussing no-fault liability, particularly under the Product Liability Act, we examined the possibility and implications of applying risk liability to pharmaceutical manufacturing using generative AI technology as a representative example, in order to overcome the limitations of the existing Product Liability Act. Humanity currently lives in an era of rapid technological development and exploding big data, enjoying numerous benefits from these advancements. As user convenience improves and massive added value is created through technological progress, the meaning of risk liability in the realm of civil liability gains greater significance. Generative AI has already drastically reduced the costs and time required for new drug development, providing substantial profits to pharmaceutical companies. However, even if the existing Product Liability Act is applied, it may be difficult to adequately remedy the harm to victims because of the reasonable alternative design defense regarding design defects. For the era of generative AI, we therefore examined the possibility of applying enhanced risk liability, taking pharmaceutical manufacturing as our example.

  • Research Article
  • Cited by 1
  • 10.47473/2020rmm0150
Integrating Generative AI and Large Language Models in Financial Sector Risk Management: Regulatory Frameworks and Practical Applications
  • Apr 1, 2025
  • Risk Management Magazine
  • Valentina Lagasio + 2 more

The rapid advancement of artificial intelligence (AI) technologies, particularly generative AI and large language models (LLMs), has ushered in a new era of possibilities for the financial sector. This paper explores the integration of these cutting-edge technologies into financial sector risk management, examining both the potential applications and the necessary regulatory frameworks. We provide a comprehensive analysis of how generative AI and LLMs can revolutionize risk assessment, fraud detection, market analysis, and regulatory compliance. The study delves into the technical aspects of these AI models, their implementation challenges, and the implications for existing risk management practices. Furthermore, we propose a novel framework for the responsible adoption of AI in financial risk management, addressing concerns related to model interpretability, data privacy, and algorithmic bias. Our findings suggest that while generative AI and LLMs offer unprecedented opportunities for enhancing risk management capabilities, they also necessitate a recalibration of regulatory approaches to ensure financial stability and consumer protection. This research contributes to the growing body of literature on AI in finance and provides actionable insights for practitioners, policymakers, and researchers in the field.
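The applications named here (risk assessment, fraud detection) build on classical statistical baselines. As a generic, hedged illustration of the kind of anomaly flagging such AI systems extend, and not this paper's own method, a z-score outlier check over transaction amounts might look like this:

```python
import statistics

def flag_anomalies(amounts: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of transactions whose amount deviates from the mean
    by more than z_threshold standard deviations."""
    mu = statistics.mean(amounts)
    sigma = statistics.stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

# Nine ordinary card transactions and one synthetic outlier.
txns = [120.0, 95.5, 130.2, 110.0, 105.7, 9800.0, 99.9, 115.3, 102.4, 108.8]
print(flag_anomalies(txns))  # -> [5]
```

ML- and LLM-based systems described in the paper aim to improve on exactly this kind of fixed-threshold rule by learning context-dependent notions of "irregular".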

  • Research Article
  • Cited by 1
  • 10.1108/tg-08-2025-0240
Generative AI and the urban AI policy challenges ahead: Trustworthy for whom?
  • Dec 4, 2025
  • Transforming Government: People, Process and Policy
  • Igor Calzada

Purpose This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of Generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue–spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences. Design/methodology/approach This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions. Findings This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement. Research limitations/implications As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures. Practical implications This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. 
The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts. Social implications The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences. Originality/value This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.

  • Research Article
  • 10.1108/yc-10-2024-2303
AI governance on young consumers in higher education: a content analysis of policies for generative AI
  • Aug 25, 2025
  • Young Consumers
  • Ashley Tong + 3 more

Purpose As generative artificial intelligence (AI) technologies continue to advance and become more prevalent in higher education, addressing the ethical concerns associated with their use is essential. This study emphasizes the need for robust AI governance as young consumers increasingly use generative AI for various applications. This paper aims to examine the ethical challenges posed by generative AI and to review the AI policies in higher education that regulate young consumers' use of generative AI, focusing on the ethical use of AI from foundational principles to sustainable governance. Design/methodology/approach Through a content analysis of literature on generative AI policies in higher education published between 2020 and 2024, this research explores a more holistic approach to integrating generative AI into the educational process. The analysis examines academic policies and governance frameworks from 28 journal papers regarding generative AI tools in higher education. Data were collected from publicly accessible sources, such as Scopus, Emerald Insight, ProQuest, Web of Science and ScienceDirect. Findings This study analyses ten elements of the governance framework to identify potential AI governance and policy settings, benefiting stakeholders aiming to enhance the regulatory framework for generative AI use in higher education. The discussions indicate a generally balanced yet cautious approach to integrating generative AI technology, especially considering ethical issues, inherent limitations and data privacy concerns. Originality/value The findings contribute to ongoing discussions on strengthening universities' responses to new academic challenges posed by the use of generative AI and promote high AI ethical standards across educational sectors.

  • Research Article
  • Cited by 2
  • 10.3390/rel15010079
Buddhist Transformation in the Digital Age: AI (Artificial Intelligence) and Humanistic Buddhism
  • Jan 9, 2024
  • Religions
  • Yutong Zheng

Humanistic Buddhism is one of the mainstreams of modern Buddhism, with special emphasis on the humanistic dimension. With the development of artificial intelligence (AI) technology, Humanistic Buddhism is also at an important stage of modernization and transformation, and thus faces a continuous negotiation between religious values and technological innovations. This paper first argues that AI is technically beneficial to the propagation of Buddhism by citing several cases in which AI technology has been used in Buddhism. Then, by comparing Master Hsing Yun’s Buddhist ethics to “Posthuman” ethics, it points out that the theories of Humanistic Buddhism share similarities with AI and Posthuman ethics. Among them, Master Hsing Yun’s theory of “the nature of insentient beings” provides an important theoretical reference for the question of “whether AI can become a Buddha”. From the technical and ethical dimensions, it points out that the interaction between Humanistic Buddhism and AI can promote original uses or implementations of AI technology. However, it should also be noted that, compared with the cases of “Artificial Narrow Intelligence” discussed in the paper, “Strong AI” could lead to far more serious ethical crises. It is also likely to foster a cult of science and technology, and thus subvert the humanistic tradition of Buddhism with a new instrumental rationality. In addition, there are some potential pitfalls that Humanistic Buddhism may encounter when using AI. Hence, while it is necessary to encourage the use of technologies such as AI in contemporary Buddhism, it is also important for Buddhism to keep a critical distance from digital technologies.

  • Front Matter
  • Cited by 2
  • 10.1016/j.jaip.2023.04.034
Can an Artificial Intelligence (AI) Be an Author on a Medical Paper?
  • Jul 1, 2023
  • The Journal of Allergy and Clinical Immunology: In Practice
  • Jay M Portnoy + 1 more

  • Research Article
  • 10.28945/5354
Is Knowledge Management (Finally) Extractive? – Fuller’s Argument Revisited in the Age of AI
  • Jan 1, 2024
  • Interdisciplinary Journal of Information, Knowledge, and Management
  • Norman A Mooradian

Aim/Purpose: The rise of modern artificial intelligence (AI), in particular, machine learning (ML), has provided new opportunities and directions for knowledge management (KM). A central question for the future of KM is whether it will be dominated by an automation strategy that replaces knowledge work or whether it will support a knowledge-enablement strategy that enhances knowledge work and uplifts knowledge workers. This paper addresses this question by re-examining and updating a critical argument against KM by the sociologist of science Steve Fuller (2002), who held that KM was extractive and exploitative from its origins. Background: This paper re-examines Fuller’s argument in light of current developments in artificial intelligence and knowledge management technologies. It reviews Fuller’s arguments in its original context wherein expert systems and knowledge engineering were influential paradigms in KM, and it then considers how the arguments put forward are given new life in light of current developments in AI and efforts to incorporate AI in the KM technical stack. The paper shows that conceptions of tacit knowledge play a key role in answering the question of whether an automating or enabling strategy will dominate. It shows that a better understanding of tacit knowledge, as reflected in more recent literature, supports an enabling vision. Methodology: The paper uses a conceptual analysis methodology grounded in epistemology and knowledge studies. It reviews a set of historically important works in the field of knowledge management and identifies and analyzes their core concepts and conceptual structure. Contribution: The paper shows that KM has had a faulty conception of tacit knowledge from its origins and that this conception lends credibility to an extractive vision supportive of replacement automation strategies. The paper then shows that recent scholarship on tacit knowledge and related forms of reasoning, in particular, abduction, provide a more theoretically robust conception of tacit knowledge that supports the centrality of human knowledge and knowledge workers against replacement automation strategies. The paper provides new insights into tacit knowledge and human reasoning vis-à-vis knowledge work. It lays the foundation for KM as a field with an independent, ethically defensible approach to technology-based business strategies that can leverage AI without becoming a merely supporting field for AI. Findings: Fuller’s argument is forceful when updated with examples from current AI technologies such as deep learning (DL) (e.g., image recognition algorithms) and large language models (LLMs) such as ChatGPT. Fuller’s view that KM presupposed a specific epistemology in which knowledge can be extracted into embodied (computerized) but disembedded (decontextualized) information applies to current forms of AI, such as machine learning, as much as it does to expert systems. Fuller’s concept of expertise is narrower than necessary for the context of KM but can be expanded to other forms of knowledge work. His account of the social dynamics of expertise as professionalism can be expanded as well and fits more plausibly in corporate contexts. The concept of tacit knowledge that has dominated the KM literature from its origins is overly simplistic and outdated. As such, it supports an extractive view of KM. More recent scholarship on tacit knowledge shows it is a complex and variegated concept. 
In particular, current work on tacit knowledge is developing a more theoretically robust and detailed conception of human knowledge that shows its centrality in organizations as a driver of innovation and higher-order thinking. These new understandings of tacit knowledge support a non-extractive, human-enabling view of KM in relation to AI. Recommendations for Practitioners: Practitioners can use the findings of the paper to consider ways to implement KM technologies that do not neglect the importance of tacit knowledge in automation projects (a neglect that often leads to failure). They should also consider how to enhance and fully leverage tacit knowledge through AI technologies that augment human knowledge. Recommendations for Researchers: Researchers can use these findings as a conceptual framework in research concerning the impact of AI on knowledge work. In particular, the distinction between replacement and enabling technologies, and the analysis of tacit knowledge as a structural concept, can be used to categorize and analyze AI technologies relative to KM research objectives. Impact on Society: The potential impact of AI on employment in the knowledge economy is a major issue in the ethics-of-AI literature and is widely recognized in the popular press as one of the pressing societal risks created by AI, and by specific types such as generative AI. This paper shows that KM, as a field of research and practice, does not need to and should not add to the risks created by automation-replacement strategies. Rather, KM has the conceptual resources to pursue a (human) knowledge-enablement approach that can stand as a viable alternative to the automation-replacement vision. Future Research: The findings of the paper suggest a number of research trajectories, including: further study of tacit knowledge and its underlying cognitive mechanisms and structures in relation to knowledge work and KM objectives; research into different types of knowledge work and knowledge processes and the role that tacit and explicit knowledge play; research into the relation between KM and automation in terms of KM's history and current technical developments; and research into how AI augments knowledge work and how KM can provide an enabling framework.

  • Research Article
  • Cited by 9
  • 10.1515/ijdlg-2024-0015
Appraising Regulatory Framework Towards Artificial General Intelligence (AGI) Under Digital Humanism
  • Oct 28, 2024
  • International Journal of Digital Law and Governance
  • Le Cheng + 1 more

The explosive advancement of contemporary artificial intelligence (AI) technologies, typified by ChatGPT, is steering humanity towards an uncontrollable trajectory to artificial general intelligence (AGI). Against the backdrop of a series of transformative breakthroughs, big tech companies such as OpenAI and Google have initiated an “AGI race” on a supranational level. As technological power becomes increasingly absolute, structural challenges may erupt with unprecedented velocity, potentially resulting in the disorderly expansion and even malignant development of AI technologies. To preserve the dignity and safety of human beings in a brand-new AGI epoch, it is imperative to implement regulatory guidelines that confine the applications of AGI within the bounds of human ethics and rules, to counteract the potential downsides. To promote the benevolent evolution of AGI, the principles of Humanism should be underscored and the connotation of Digital Humanism further enriched. Correspondingly, the current regulatory paradigm for generative AI may also be overhauled under the tenet of Digital Humanism to adapt to the quantum leaps and subversive shifts produced by AGI in the future. Positioned at the nexus of legal studies, computer science, and moral philosophy, this study therefore charts a course for a synthetic regulatory framework for AGI under Digital Humanism.

  • Research Article
  • 10.53819/81018102t4367
Adoption of Artificial Intelligence Technology and Its Impact on Insurance Company Performance in Kenya
  • Dec 16, 2025
  • Journal of Marketing and Communication
  • Benjamin Abongo

This study investigates the adoption of artificial intelligence (AI) technologies and their impact on the performance of insurance companies in Kenya. While AI has been widely acknowledged for improving operational efficiency, risk management, and regulatory compliance, limited empirical evidence exists on its measurable influence within the insurance sector. A descriptive research design was employed, focusing on 71 insurance companies registered with the Insurance Regulatory Authority (IRA). Both qualitative and quantitative data were collected to assess the relationship between AI adoption and organizational performance. The results indicate that AI adoption has a significant positive effect on performance, explaining 57.9% of the variability observed. Technologies such as generative AI, machine learning and deep learning, blockchain, natural language processing (NLP), computer vision, and IoT were found to contribute substantially to operational improvements and customer service delivery. The findings highlight the strategic importance of AI integration in enhancing competitiveness and efficiency within Kenya’s insurance industry. Broader adoption of AI technologies is recommended to strengthen performance outcomes across the sector. This study provides empirical evidence on the relevance of AI adoption in the Kenyan insurance industry, addressing a critical gap in existing literature and offering insights for both practitioners and policymakers. Keywords: Artificial Intelligence, AI Adoption, Insurtech, Artificial Intelligence Technology
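The reported figure that AI adoption "explains 57.9% of the variability" is an R-squared statistic from the study's regression. The minimal sketch below, using synthetic data, shows only how such a value is computed; it is not the study's model or data:

```python
# R-squared from a simple least-squares fit; synthetic data, for illustration.
def r_squared(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sxy / sxx                       # slope of y regressed on x
    ss_res = sum((b - (my + beta * (a - mx))) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot             # share of variance explained

# x: a hypothetical AI-adoption score; y: a performance index (both synthetic).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.0, 5.8, 6.1]
print(round(r_squared(x, y), 3))  # -> about 0.94 for this near-linear toy data
```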

  • Research Article
  • 10.54660/ijmor.2025.4.1.125-136
The Decision-Making Process for Selecting Online Travel Agencies by Thai Gen Y Tourists
  • Jan 1, 2025
  • International Journal of Management and Organizational Research
  • Felix Chisomebi Okwaraoha

By improving accuracy, efficiency, and predictive power, the incorporation of Artificial Intelligence (AI) into financial models has revolutionized conventional financial analysis. AI-driven models process massive datasets, find patterns, and produce insights that enhance financial decision-making by utilizing machine learning (ML), deep learning (DL), and natural language processing (NLP). Conventional financial models, based on statistical techniques and historical data, frequently find it difficult to adjust to changing market conditions. However, AI's adaptive learning, real-time processing, and automation enable financial institutions to improve risk assessment, portfolio management, and fraud detection. By identifying irregularities and forecasting market volatility based on past and current data, AI-powered algorithms improve risk management. Machine learning models such as support vector machines (SVM), neural networks (NN), and reinforcement learning (RL) enhance credit scoring and give lenders more accurate information about a borrower's reliability. Additionally, algorithmic trading minimizes human error and maximizes earnings by using AI to evaluate market trends and execute trades at the best times. Financial institutions can extract insights from news stories, social media, and analyst reports by using natural language processing (NLP) for sentiment analysis, which helps them make well-informed investment decisions. Furthermore, through the analysis of transactional data, generative AI and large language models (LLMs) improve financial reporting, automate compliance monitoring, and identify fraudulent activity. AI-powered robo-advisors democratize financial planning for individual investors by offering tailored investment suggestions. Notwithstanding its benefits, incorporating AI into financial models has drawbacks, including algorithmic bias, data privacy, computing costs, and regulatory compliance. Maintaining transparency in decision-making procedures and ensuring the ethical application of AI remain crucial issues. Explainable AI (XAI) is a promising approach to improving interpretability and confidence in AI-driven financial systems. AI's involvement in capital allocation, asset pricing, and financial forecasting will grow as it develops further, spurring efficiency and innovation in the financial industry. Future studies should concentrate on enhancing the interpretability of AI, developing regulatory frameworks, and creating hybrid AI models that integrate cutting-edge machine learning methods with conventional financial theories. Global financial ecosystems are changing as a result of the confluence of artificial intelligence (AI), big data, and financial technology (FinTech), opening the door for more intelligent and robust financial models.
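The abstract names SVMs, neural networks, and reinforcement learning for credit scoring. As a hedged, stdlib-only illustration of the general idea, using plain logistic regression rather than any of those models, and with invented features and data:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Hypothetical features: (normalized income, debt-to-income); label: 1 = repaid.
X = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.3, 0.8), (0.2, 0.9), (0.4, 0.7)]
y = [1, 1, 1, 0, 0, 0]
w = train_logistic(X, y)

# Score a new applicant: a probability of repayment usable as a credit score.
applicant = (0.85, 0.15)
score = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], applicant)))
print(f"repayment probability: {score:.2f}")  # close to 1 for this toy case
```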

  • Research Article
  • Cited by 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more
