From Apomediation to AImediation: Generative AI and the Reconfiguration of Informational Authority in Health Communication

Abstract
Objective: This conceptual paper explores the transition from apomediation, in which patients or users independently seek and access health information (often via the internet and social networks) rather than relying exclusively on the intermediation of health professionals, to AImediation. It examines how generative artificial intelligence (GAI) reconfigures the dynamics of informational authority, access, and user autonomy in digital health environments, in light of the increasing use of generative AI tools in healthcare contexts.

Method: This study examined how mediation models in health information have changed over time, drawing on Eysenbach's framework and recent developments in large language models (LLMs). A new model was created to compare intermediation, apomediation, and AImediation.

Results: AImediation emerges as a new paradigm in which patients or users interact directly with AI tools such as ChatGPT, Claude, Perplexity, or Gemini to access compiled multi-source health information. While this model retains the user autonomy characteristic of apomediation, it centralizes information flows and removes peer-based social layers. Key challenges include algorithmic opacity, prompt dependence, and the risk of misinformation due to hallucinations or biased outputs.

Conclusion: AImediation redefines how individuals access and evaluate health information, requiring critical engagement from users and responsible development by technology providers. This framework calls for further research into its effects on patient behavior, the roles of professionals, and the ethical use of AI in healthcare.

References (10 of 18 shown)
  • 10.1038/s41746-025-01543-z
A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians
  • Mar 22, 2025
  • npj Digital Medicine
  • Hirotaka Takita + 7 more

  • 10.3389/fdgth.2025.1584883
Perspective: advancing public health education by embedding AI literacy.
  • Jul 16, 2025
  • Frontiers in digital health
  • Jose A Acosta

  • 10.1007/978-3-031-71412-2_26
Unveiling the Power of Apomediation: Perspectives from Individuals Living with Autoimmune Disease
  • Oct 12, 2024
  • Eldridge Van Der Westhuizen + 2 more

  • 10.3390/ijerph18084359
‘How to Botox’ on YouTube: Influence and Beauty Procedures in the Era of User-Generated Content
  • Apr 20, 2021
  • International Journal of Environmental Research and Public Health
  • Bárbara Castillo-Abdul + 2 more

  • 10.17583/rimcis.13679
Dolor Lumbar en YouTube: Calidad y Valor Educativo [Low Back Pain on YouTube: Quality and Educational Value]
  • Jun 11, 2024
  • International and Multidisciplinary Journal of Social Sciences
  • Tomás Fontaines-Ruiz + 3 more

  • 10.1001/jama.2020.4263
Addressing Medical Misinformation in the Patient-Clinician Relationship
  • Dec 15, 2020
  • JAMA
  • Vineet M Arora + 2 more

From intermediation to disintermediation and apomediation: new models for consumers to access and assess the credibility of health information in the age of Web 2.0.
  • Aug 28, 2007
  • Studies in health technology and informatics
  • Gunther Eysenbach

  • 10.1109/iri.2009.5211587
About agents that reason by case (Preliminary report)
  • Aug 1, 2009
  • Philippe Besnard + 1 more

  • 10.1007/978-3-031-71318-7_18
A Comparative Analysis of Artificial Hallucinations in GPT-3.5 and GPT-4: Insights into AI Progress and Challenges
  • Oct 23, 2024
  • M N Mohammed + 4 more

  • 10.2196/jmir.1030
Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness
  • Aug 25, 2008
  • Journal of Medical Internet Research
  • Gunther Eysenbach

Similar Papers
  • Research Article
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more

  • Conference Article
  • 10.2118/221883-ms
Domain Driven Methodology Adopting Generative AI Application in Oil and Gas Drilling Sector
  • Nov 4, 2024
  • Daria Ponomareva + 5 more

In the dynamic landscape of oil and gas drilling, Generative Artificial Intelligence (Generative AI) emerges as an indispensable ally, leveraging historical drilling data to revolutionize operational efficiency, mitigate risks, and empower informed decision-making. Existing Generative AI methods and tools, such as Large Language Models (LLMs) and agents, require tuning and customization for the oil and gas drilling sector. Applying Generative AI in drilling confronts hurdles such as ensuring data quality and navigating the complexity of operations. A methodology integrating Generative AI into drilling must be comprehensive and interdisciplinary. The agile strategy revolves around constructing a network of specialized LLM agents, meticulously crafted to understand industry-specific terminology and the intricate operational relationships rooted in drilling domain expertise. Each agent is linked to manuals, standards, and specific operational drilling data sources, and has unique instructions optimizing computational efficiency and driving cost savings. Moreover, to ensure cost-effectiveness, LLMs are selectively employed, while repetitive user inquiries are addressed through data retrieval from aggregated storage. Consistent responses to user queries are provided through text and graphs revealing insights from drilling operations, standards, manuals, practices, and lessons learned. The applied methodology efficiently navigates the pre-processed user database using the custom agents developed. Communication with the user takes the form of a chat framed within a web application, and queries against a database covering hundreds of wells are answered in less than a minute. The methodology can analyze data and graphs by comparing Key Performance Indicators (KPIs). A wide range of graph outputs is represented by bar charts, scatter plots, and maps, including self-explanatory charts such as the Time versus Depth (TVD) curve with Non-Productive Time (NPT) events marked with details underneath. Understanding the data content, data preparation steps, and user needs is fundamental to successful application of the methodology. The proposed Generative AI methodology is not just a tool for data interpretation but a catalyst for real-time decision-making in complex drilling environments. Its integration into oil and gas drilling operations signifies a pivotal advancement, showcasing its transformative potential in revolutionizing the industry's landscape. This approach leads to notable cost reductions, improved resource utilization, and increased productivity, paving the way for a new era in drilling operations. A method driven by selective, cost-effective, and domain-specific LLM agents stands poised to revolutionize drilling operations, seamlessly integrating generative AI to amplify efficiency and propel informed decision-making within the oil and gas drilling sector.

  • Research Article
  • 10.2196/53008
Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges
  • Mar 8, 2024
  • Journal of Medical Internet Research
  • Yan Chen + 1 more

As advances in artificial intelligence (AI) continue to transform and revolutionize the field of medicine, understanding the potential uses of generative AI in health care becomes increasingly important. Generative AI, including models such as generative adversarial networks and large language models, shows promise in transforming medical diagnostics, research, treatment planning, and patient care. However, these data-intensive systems pose new threats to protected health information. This Viewpoint paper aims to explore various categories of generative AI in health care, including medical diagnostics, drug discovery, virtual health assistants, medical research, and clinical decision support, while identifying security and privacy threats within each phase of the life cycle of such systems (ie, data collection, model development, and implementation phases). The objectives of this study were to analyze the current state of generative AI in health care, identify opportunities and privacy and security challenges posed by integrating these technologies into existing health care infrastructure, and propose strategies for mitigating security and privacy risks. This study highlights the importance of addressing the security and privacy threats associated with generative AI in health care to ensure the safe and effective use of these systems. The findings of this study can inform the development of future generative AI systems in health care and help health care organizations better understand the potential benefits and risks associated with these systems. By examining the use cases and benefits of generative AI across diverse domains within health care, this paper contributes to theoretical discussions surrounding AI ethics, security vulnerabilities, and data privacy regulations. In addition, this study provides practical insights for stakeholders looking to adopt generative AI solutions within their organizations.

  • Research Article
  • 10.18502/htaa.v4i1.5861
E-Health Literacy: A Skill Needed in the Coronavirus Outbreak Crisis
  • Mar 31, 2021
  • Health Technology Assessment in Action
  • Meisam Dastani


  • Research Article
  • 10.1038/s41746-025-01752-6
Evaluating evidence-based health information from generative AI using a cross-sectional study with laypeople seeking screening information
  • Jun 9, 2025
  • npj Digital Medicine
  • Felix G Rebitschek + 5 more

Large language models (LLMs) are used to seek health information. Guidelines for evidence-based health communication require the presentation of the best available evidence to support informed decision-making. We investigate the prompt-dependent guideline compliance of LLMs and evaluate a minimal behavioural intervention for boosting laypeople's prompting. Study 1 systematically varied prompt informedness, topic, and LLM to evaluate compliance. Study 2 randomized 300 participants to three LLMs under standard or boosted prompting conditions. Blinded raters assessed LLM responses with two instruments. Study 1 found that LLMs failed to meet evidence-based health communication standards. The quality of responses was contingent upon prompt informedness. Study 2 revealed that laypeople frequently generated poor-quality responses. The simple boost improved response quality, though it remained below required standards. These findings underscore the inadequacy of LLMs as standalone health communication tools. Integrating LLMs with evidence-based frameworks, enhancing their reasoning and interfaces, and teaching prompting are essential. Study Registration: German Clinical Trials Register (DRKS) (Reg. No.: DRKS00035228, registered on 15 October 2024).

  • Research Article
  • 10.1016/j.clon.2025.103798
Artificial Intelligence in Health Care: A Rallying Cry for Critical Clinical Research and Ethical Thinking.
  • May 1, 2025
  • Clinical oncology (Royal College of Radiologists (Great Britain))
  • S M Bentzen

  • Research Article
  • 10.2196/79961
Evolving Health Information–Seeking Behavior in the Context of Google AI Overviews, ChatGPT, and Alexa: Interview Study Using the Think-Aloud Protocol
  • Oct 7, 2025
  • Journal of Medical Internet Research
  • Claire Wardle + 2 more

Background: Online health information seeking is undergoing a major shift with the advent of artificial intelligence (AI)–powered technologies such as voice assistants and large language models (LLMs). While existing health information–seeking behavior models have long explained how people find and evaluate health information, less is known about how users engage with these newer tools, particularly tools that provide "one" answer rather than the resources to investigate a number of different sources. Objective: This study aimed to explore how people use and perceive AI- and voice-assisted technologies when searching for health information and to evaluate whether these tools are reshaping traditional patterns of health information seeking and credibility assessment. Methods: We conducted in-depth qualitative research with 27 participants (ages 19-80 years) using a think-aloud protocol. Participants searched for health information across 3 platforms (Google, ChatGPT, and Alexa) while verbalizing their thought processes. Prompts included both a standardized hypothetical scenario and a personally relevant health query. Sessions were transcribed and analyzed using reflexive thematic analysis to identify patterns in search behavior, perceptions of trust and utility, and differences across platforms and user demographics. Results: Participants integrated AI tools into their broader search routines rather than using them in isolation. ChatGPT was valued for its clarity, speed, and ability to generate keywords or summarize complex topics, even by users skeptical of its accuracy. Trust and utility did not always align; participants often used ChatGPT despite concerns about sourcing and bias. Google's AI Overviews were met with caution; participants frequently skipped them to review traditional search results. Alexa was viewed as convenient but limited, particularly for in-depth health queries. Platform choice was influenced by the seriousness of the health issue, context of use, and prior experience. One-third of participants were multilingual, and they identified challenges with voice recognition, cultural relevance, and data provenance. Overall, users exhibited sophisticated "mix-and-match" behaviors, drawing on multiple tools depending on context, urgency, and familiarity. Conclusions: The findings suggest the need for additional research into the ways in which search behavior in the era of AI- and voice-assisted technologies is becoming more dynamic and context-driven. While the sample size is small, participants in this study selectively engaged with AI- and voice-assisted tools based on perceived usefulness, not just trustworthiness, challenging assumptions that credibility is the primary driver of technology adoption. Findings highlight the need for digital health literacy efforts that help users evaluate both the capabilities and limitations of emerging tools. Given the rapid evolution of search technologies, longitudinal studies and real-time observation methods are essential for understanding how AI continues to reshape health information seeking.

  • Research Article
  • 10.3389/fpubh.2021.706779
Factors Influencing the Accessibility and Reliability of Health Information in the Face of the COVID-19 Outbreak—A Study in Rural China
  • Dec 23, 2021
  • Frontiers in Public Health
  • Li Zhu + 2 more

Introduction: Rural residents have been shown to have limited access to reliable health information and therefore may be at higher risk for the adverse health effects of COVID-19. The aim of this research is twofold: (1) to explore the impacts of demographic factors on the accessibility of health information; and (2) to assess the impacts of information channels on the reliability of health information accessed by rural residents in China during the COVID-19 outbreak. Methods: Mixed methods research was performed to provide a relatively complete picture of the accessibility and reliability of health information in rural China in the face of COVID-19. A quantitative study surveyed 435 Chinese rural residents, and a qualitative study collected materials from one of the most popular social media applications (WeChat) in China. Logistic regression was used to examine the impacts of demographic factors on the accessibility of health information. Content analysis was performed to describe and summarize the qualitative materials and to inform the impacts of information channels on the reliability of health information. Results: Age was found to be positively associated with the accessibility of health information, while an opposite association was found for education. Rural residents with monthly incomes between 3,001 CNY and 4,000 CNY were the least likely to access health information. Rural residents who worked/studied from home were more likely to access health information. Meanwhile, health information tended to be derived from non-official social media channels where rumors and unverified health information spread fast, and elderly and less-educated rural residents were more likely to access health misinformation. Conclusions: Policy makers are advised to adopt efficient measures to contain the spread of rumors and unverified health information on non-official social media platforms during the outbreak of a pandemic. More effort should be devoted to assisting elderly and less-educated rural residents in accessing reliable health information in the face of a pandemic outbreak.

  • Conference Article
  • 10.2118/222046-ms
Innovating Oil and Gas Field Operations - Harnessing the Power of Generative Ai for Supporting Workforce Towards Achieving Autonomous Operations
  • Nov 4, 2024
  • Nagaraju Reddicharla + 1 more

In today's dynamic and competitive oil and gas industry, the integration of Artificial Intelligence (AI) has emerged as a game-changer, offering unparalleled opportunities for optimization, cost reduction, and operational excellence. The main objective of autonomous operations is to minimize manual interactions and maximize self-directed plant operations. ADNOC Onshore has implemented generative AI agents in daily maintenance and production operations to boost workforce productivity on the journey toward autonomous operations. This paper explains the use cases, challenges, AI architecture, and data security in deployment. Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. GPT-4 Turbo is a large multimodal model (accepting text or image inputs and generating text) that can solve difficult problems with greater accuracy and advanced reasoning capabilities. The scope includes empowering reliability, maintenance, and operations professionals to draw insights from equipment manuals, asset operating manuals and operating procedures, maintenance records, and safety and integrity manuals. This in-house solution offers support across structured and unstructured data, an LLM-agnostic architecture, deterministic responses with source references, and granular access controls. The solution has been integrated with the SAP ERP system, the PI sensor time-series system, and data historians for integrated context. A unique automated contextualization engine, based on oil and gas specific vocabulary, brings context to operations. A conversational interactive agent has been built for user interactions. The maintenance and operations engineer can receive suggestions on the proper steps to identify the root cause based on OEM product manuals, previous events, and current performance. This Generative AI solution accelerates time to insight for operators by equipping teams to streamline maintenance operations and investigate maintenance records with generative AI to troubleshoot operations challenges more efficiently. An internal study showed that operational productivity increased by 20% after the solution's implementation. For the model to understand industrial environments, it would require retraining on industrial data. Using existing models on uncontextualized, unstructured industrial data significantly increases the risk of incorrect and untrustworthy answers, referred to as AI hallucinations. Another significant challenge lies in the dependence on the quality and quantity of available data for training. AI models require extensive and representative datasets to produce accurate and reliable predictions. Large language models are a type of artificial intelligence (AI) model designed to understand and generate human language, built upon deep learning architectures, particularly transformers. Generative AI can play a significant role in oil and gas asset operations toward the goal of achieving autonomous operations.

  • Research Article
  • 10.3389/fdgth.2025.1644041
Artificial intelligence in healthcare: applications, challenges, and future directions. A narrative review informed by international, multidisciplinary expertise
  • Nov 6, 2025
  • Frontiers in Digital Health
  • Ata Mohajer-Bastami + 24 more

Objectives This narrative review evaluates the role of artificial intelligence (AI) in healthcare, summarizing its historical evolution, current applications across medical and surgical specialties, and implications for allied health professions and biomedical research. Methods We conducted a structured literature search in Ovid MEDLINE (2018–2025) using terms related to AI, machine learning, deep learning, large language models, generative AI, and healthcare applications. Priority was given to peer-reviewed articles providing novel insights, multidisciplinary perspectives, and coverage of underrepresented domains. Key findings AI is increasingly applied to diagnostics, surgical navigation, risk prediction, and personalized medicine. It also holds promise in allied health, drug discovery, genomics, and clinical trial optimization. However, adoption remains limited by challenges including bias, interpretability, legal frameworks, and uneven global access. Contributions This review highlights underexplored areas such as generative AI and allied health professions, providing an integrated multidisciplinary perspective. Conclusions With careful regulation, clinician-led design, and global equity considerations, AI can augment healthcare delivery and research. Future work must focus on robust validation, responsible implementation, and expanding education in digital medicine.

  • Research Article
  • 10.55041/isjem03936
A Review of Current Concerns and Mitigation Strategies on Generative AI and LLMs
  • Jun 3, 2025
  • International Scientific Journal of Engineering and Management
  • Ruchika Ruchika

The advent of large language models and generative artificial intelligence has fundamentally changed the way we generate and understand language, marking the beginning of a new phase in AI-driven applications. This review paper surveys the advancements that have occurred over time, providing a thorough assessment of generative artificial intelligence and large language models and examining their potential impact across different areas. The first section focuses on the evolution of large language models and generative AI, highlighting developments in models such as GPT-4. These models have repeatedly demonstrated their capabilities across various sectors, from automated content generation to accurate conversational agents, and are characterized by their ability to produce text that is both coherent and contextually appropriate. However, despite their strengths, generative artificial intelligence and large language models face critical ethical, technological, and societal issues. One mainstream concern arises from biases present in the training data, which can lead to social inequalities. This review examines the causes of these biases and their implications, stressing the need for comprehensive frameworks to identify and mitigate them. Keywords: backpropagation, BERT, diffusion models, explainable AI (XAI), generative AI, image synthesis, long short-term memory (LSTM), natural language processing (NLP), neural network, recurrent neural network (RNN), small language model (SLM), transformer model.

  • Research Article
  • 10.21203/rs.3.rs-3661764/v1
Faithful AI in Medicine: A Systematic Review with Large Language Models and Beyond
  • Dec 4, 2023
  • Research Square
  • Qianqian Xie + 5 more

Objective: While artificial intelligence (AI), particularly large language models (LLMs), offers significant potential for medicine, it raises critical concerns due to the possibility of generating factually incorrect information, leading to potential long-term risks and ethical issues. This review aims to provide a comprehensive overview of the faithfulness problem in existing research on AI in healthcare and medicine, with a focus on analysis of the causes of unfaithful results, evaluation metrics, and mitigation methods. Materials and Methods: Using PRISMA methodology, we sourced 5,061 records from five databases (PubMed, Scopus, IEEE Xplore, ACM Digital Library, Google Scholar) published between January 2018 and March 2023. We removed duplicates and screened records based on exclusion criteria. Results: With 40 articles remaining, we conducted a systematic review of recent developments aimed at optimizing and evaluating factuality across a variety of generative medical AI approaches. These include knowledge-grounded LLMs, text-to-text generation, multimodality-to-text generation, and automatic medical fact-checking tasks. Discussion: Current research investigating the factuality problem in medical AI is in its early stages. There are significant challenges related to data resources, backbone models, mitigation methods, and evaluation metrics. Promising opportunities exist for novel faithful medical AI research involving the adaptation of LLMs and prompt engineering. Conclusion: This comprehensive review highlights the need for further research to address the issues of reliability and factuality in medical AI, serving as both a reference and an inspiration for future research into the safe, ethical use of AI in medicine and healthcare.

  • Research Article
  • 10.1007/s11930-024-00397-y
The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020–2024
  • Dec 4, 2024
  • Current Sexual Health Reports
  • Nicola Döring + 4 more

Purpose of Review: Millions of people now use generative artificial intelligence (GenAI) tools in their daily lives for a variety of purposes, including sexual ones. This narrative literature review provides the first scoping overview of current research on generative AI use in the context of sexual health and behaviors. Recent Findings: The review includes 88 peer-reviewed English-language publications from 2020 to 2024 that report on 106 studies and address four main areas of AI use in sexual health and behaviors among the general population: (1) People use AI tools such as ChatGPT to obtain sexual information and education. We identified k = 14 publications that evaluated the quality of AI-generated sexual health information; they found high accuracy and completeness. (2) People use AI tools such as ChatGPT and dedicated counseling/therapy chatbots to solve their sexual and relationship problems. We identified k = 16 publications providing empirical results on therapists' and clients' perspectives and AI tools' therapeutic capabilities, with mixed but overall promising results. (3) People use AI tools such as companion and adult chatbots (e.g., Replika) to experience sexual and romantic intimacy. We identified k = 22 publications in this area that confirm sexual and romantic gratifications of AI conversational agents, but also point to risks such as emotional dependence. (4) People use image- and video-generating AI tools to produce pornography with different sexual and non-sexual motivations. We found k = 36 studies on AI pornography that primarily address the production, uses, and consequences of, as well as the countermeasures against, non-consensual deepfake pornography. This sort of content predominantly victimizes women and girls, whose faces are swapped into pornographic material and circulated without their consent. Research on ethical AI pornography is largely missing. Summary: Generative AI tools present new risks and opportunities for human sexuality and sexual health. More research is needed to better understand the intersection of GenAI and sexuality in order to (a) help people navigate their sexual GenAI experiences, (b) guide sex educators, counselors, and therapists on how to address and incorporate AI tools into their professional work, (c) advise AI developers on how to design tools that avoid harm, (d) enlighten policymakers on how to regulate AI for the sake of sexual health, and (e) inform journalists and knowledge workers on how to report about AI and sexuality in an evidence-based manner.

  • Research Article
  • 10.1371/journal.pone.0311410
Improving citizen-government interactions with generative artificial intelligence: Novel human-computer interaction strategies for policy understanding through large language models.
  • Dec 17, 2024
  • PloS one
  • Lixin Yun + 2 more

Effective communication of government policies to citizens is crucial for transparency and engagement, yet challenges such as accessibility, complexity, and resource constraints obstruct this process. In the era of digital transformation and Generative AI, integrating Generative AI and artificial intelligence technologies into public administration has significantly enhanced government governance, promoting dynamic interaction between public authorities and citizens. This paper proposes a system leveraging Retrieval-Augmented Generation (RAG) technology combined with Large Language Models (LLMs) to improve policy communication. Addressing challenges of accessibility, complexity, and engagement in traditional dissemination methods, our system uses LLMs and a sophisticated retrieval mechanism to generate accurate, comprehensible responses to citizen queries about policies. This novel integration of RAG and LLMs for policy communication represents a significant advancement over traditional methods, offering unprecedented accuracy and accessibility. We evaluated our system with a diverse dataset of policy documents from both Chinese and US regional governments, comprising over 200 documents across various policy topics. Our system demonstrated high accuracy, averaging 85.58% for Chinese and 90.67% for US policies. Evaluation metrics included accuracy, comprehensibility, and public engagement, measured against expert human responses and baseline comparisons. The system effectively boosted public engagement, with case studies highlighting its impact on transparency and citizen interaction. These results indicate the system's efficacy in making policy information more accessible and understandable, thus enhancing public engagement. This innovative approach aims to build a more informed and participatory democratic process by improving communication between governments and citizens.

  • Research Article
  • 10.1177/21501319251381878
From Apomediation to AImediation: Generative AI and the Reconfiguration of Informational Authority in Health Communication
  • Sep 24, 2025
  • Journal of Primary Care & Community Health
  • Luis M Romero-Rodriguez + 1 more
