Human Experts Research Articles

Overview
8825 Articles

Published in last 50 years

Related Topics

  • Domain Experts
  • Trained Experts
  • Learning Experts
  • Expert Operators

Articles published on Human Experts

8353 Search results
Comparing the persuasiveness of role-playing large language models and human experts on polarized U.S. political issues

Abstract Advances in large language models (LLMs) could significantly disrupt political communication. In a large-scale pre-registered experiment (n = 4955), we prompted GPT-4 to generate persuasive messages impersonating the language and beliefs of U.S. political parties—a technique we term “partisan role-play”—and directly compared their persuasiveness to that of human persuasion experts. In aggregate, the persuasive impact of role-playing messages generated by GPT-4 was not significantly different from that of non-role-playing messages. However, the persuasive impact of GPT-4 rivaled, and on some issues exceeded, that of the human experts. Taken together, our findings suggest that—contrary to popular concern—instructing current LLMs to role-play as partisans offers limited persuasive advantage, but also that current LLMs can rival and even exceed the persuasiveness of human experts. These results potentially portend widespread adoption of AI tools by persuasion campaigns, with important implications for the role of AI in politics and democracy.

  • Journal: AI & SOCIETY
  • Publication Date: Jul 16, 2025
  • Authors: Kobi Hackenburg + 3

Diagnostic accuracy differences in detecting wound maceration between humans and artificial intelligence: the role of human expertise revisited.

This study aims to compare the diagnostic abilities of humans in wound image assessment with those of an AI-based model, examine how "expertise" affects clinicians' diagnostic performance, and investigate the heterogeneity in clinical judgments. A total of 481 healthcare professionals completed a diagnostic task involving 30 chronic wound images with and without maceration. A convolutional neural network (CNN) classification model performed the same task. To predict human accuracy, participants' "expertise," i.e., pertinent formal qualification, work experience, self-confidence, and wound focus, was analyzed in a regression analysis. Human interrater reliability was calculated. Human participants achieved an average accuracy of 79.3% and a maximum accuracy of 85% in the formally qualified group. The CNN performed better, achieving 90% accuracy, although the difference was not significant. Pertinent formal qualification (β = 0.083, P < .001) and diagnostic self-confidence (β = 0.015, P = .002) significantly predicted human accuracy, while work experience and focus on wound care had no effect (R² = 24.3%). Overall interrater reliability was "fair" (Kappa = 0.391). Among the "expertise"-related factors, only the qualification and self-confidence variables influenced diagnostic accuracy. These findings challenge previous assumptions about work experience or job titles defining "expertise" and influencing human diagnostic performance. This study offers guidance to future studies when comparing human expert and AI task performance. However, to explain human diagnostic accuracy, "expertise" may only serve as one correlate, while additional factors need further research.

  • Journal: Journal of the American Medical Informatics Association (JAMIA)
  • Publication Date: Jul 16, 2025
  • Authors: Florian Kücking + 2

The Alchemist, the Scientist, and the Robot: Exploring the Potential of Human-AI Symbiosis in Self-Driving Polymer Laboratories.

Polymer chemistry research has progressed through three methodological eras: the alchemist's intuitive trial-and-error, the scientist's rule-based design, and the robot's algorithm-guided automation. While approaches combining combinatorial chemistry with statistical design of experiments offer a systematic approach to polymer discovery, they struggle to handle complex design spaces, avoid human biases, and scale up. In response, the discipline has adopted automation and artificial intelligence (AI), culminating in self-driving laboratories (SDLs), which integrate high-throughput experimentation into closed-loop, AI-assisted design-build-test-learn cycles and enable the rapid exploration of chemical spaces. However, while SDLs address throughput and complexity challenges, they introduce new forms of the original problems: algorithmic biases replace human biases, data sparsity creates constraints on design space navigation, and black-box AI models create transparency issues, complicating interpretation. These challenges emphasize a critical point: complete algorithmic autonomy is inadequate without human involvement. Human intuition, ethical judgment, and domain expertise are crucial for establishing research objectives, identifying anomalies, and ensuring adherence to ethical constraints. This perspective supports a hybrid model grounded in symbiotic autonomy, where adaptive collaboration between humans and AI enhances trust, creativity, and reproducibility. By incorporating human reasoning into adaptive AI-assisted SDL workflows, next-generation autonomous polymer discovery will be not only faster but also wiser.

  • Journal: Macromolecular Rapid Communications
  • Publication Date: Jul 16, 2025
  • Authors: Bahar Dadfar + 2

Data-Efficient Sowing Position Estimation for Agricultural Robots Combining Image Analysis and Expert Knowledge

We propose a data-efficient framework for automating sowing operations by agricultural robots in densely mixed polyculture environments. This study addresses the challenge of enabling robots to identify suitable sowing positions with minimal labeled data by integrating image-based field sensing with expert agricultural knowledge. We collected 84 RGB-depth images from seven field sites, labeled by synecological farming practitioners of varying proficiency levels, and trained a regression model to estimate optimal sowing positions and seeding quantities. The model’s predictions were comparable to those of intermediate-to-advanced practitioners across diverse field conditions. To implement this estimation in practice, we mounted a Kinect v2 sensor on a robot arm and integrated its 3D spatial data with axis-specific movement control. We then applied a trajectory optimization algorithm based on the traveling salesman problem to generate efficient sowing paths. Simulated trials incorporating both computation and robotic control times showed that our method reduced sowing operation time by 51% compared to random planning. These findings highlight the potential of interpretable, low-data machine learning models for rapid adaptation to complex agroecological systems and demonstrate a practical approach to combining structured human expertise with sensor-based automation in biodiverse farming environments.
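The trajectory-optimization step above frames sowing-path planning as a traveling salesman problem. A minimal sketch of the idea in Python, using a greedy nearest-neighbor heuristic; the function names and the heuristic itself are illustrative assumptions, not the paper's actual algorithm:

```python
import math

def plan_sowing_path(positions, start=(0.0, 0.0)):
    """Order sowing positions greedily: always visit the nearest
    remaining target. A simple stand-in for TSP-based planning."""
    remaining = list(positions)
    path = []
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

def path_length(path, start=(0.0, 0.0)):
    """Total travel distance for a path beginning at `start`."""
    pts = [start] + list(path)
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
```

A nearest-neighbor ordering is far cheaper than an exact TSP solve and often suffices to cut travel time substantially versus a random visiting order, which is the kind of gain the simulated trials quantify.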

  • Journal: Agriculture
  • Publication Date: Jul 16, 2025
  • Authors: Shuntaro Aotake + 3

Design of Intelligent Educational Mobile Apps with an Original Dataset for Chinese-Portuguese Translators

Translation remains a vital process in many culturally diverse countries. Despite significant advances in artificial intelligence (AI) technology, machine translation currently lacks the ability to fully replace human expertise, requiring continued human intervention and review in translation workflows. This article introduces an innovative mobile education application (app) designed to train translators, with a particular focus on Chinese-Portuguese translation. The app uses a set of practice data, the Chinese-Portuguese translation exercise corpus (CPTEC), developed by our corpus team to autonomously assess and identify translation quality defects, thereby promoting skill improvement. We also propose a novel hybrid grading system based on different translation quality assessment (TQA) dimensions to automatically evaluate translations by imitating humans. In addition, the article demonstrates the design of challenging exercises within a mobile app to reinforce translation proficiency. To optimize the functionality of the mobile app, we use a large language model (LLM) to validate the solution, ensure that it learns the training material provided, and track its performance. Subsequent experimental results show that the fine-tuned LLM improves on multiple dimensions (including accuracy, fidelity, fluency, readability, acceptability, and usability) compared to the initial state, confirming the effectiveness of the developed practice data in improving translation performance. To promote access to research, the practice data (CPTEC) will be distributed within the relevant AI community to inspire people to create innovative software applications to support translators.

  • Journal: Forum for Linguistic Studies
  • Publication Date: Jul 16, 2025
  • Authors: Lap Man Hoi + 3

Automated radiotherapy treatment planning guided by GPT-4Vision

Abstract Objective: Radiotherapy treatment planning is a time-consuming and potentially subjective process that requires the iterative adjustment of model parameters to balance multiple conflicting objectives. Recent advancements in frontier Artificial Intelligence (AI) models offer promising avenues for addressing the challenges in planning and clinical decision-making. This study introduces GPT-RadPlan, an automated treatment planning framework that integrates radiation oncology knowledge with the reasoning capabilities of large multi-modal models, such as GPT-4Vision (GPT-4V) from OpenAI.

Approach: Via in-context learning, we incorporate clinical requirements and a few (3 in our experiments) approved clinical plans with their optimization settings, enabling GPT-4V to acquire treatment planning domain knowledge. The resulting GPT-RadPlan system is integrated into our in-house inverse treatment planning system through an application programming interface (API). For a given patient, GPT-RadPlan acts as both plan evaluator and planner, first assessing dose distributions and dose-volume histograms (DVHs), and then providing "textual feedback" on how to improve the plan to match the physician's requirements. In this manner, GPT-RadPlan iteratively refines the plan by adjusting planning parameters, such as weights and dose objectives, based on its suggestions.

Main results: The efficacy of the automated planning system is showcased across 17 prostate cancer and 13 head & neck cancer VMAT plans with prescribed doses of 70.2 Gy and 72 Gy, respectively, where we compared GPT-RadPlan results to clinical plans produced by human experts. In all cases, GPT-RadPlan either outperformed or matched the clinical plans, demonstrating superior target coverage and reducing organ-at-risk doses by 5 Gy on average.

Significance: Consistently satisfying the dose-volume objectives in the clinical protocol, GPT-RadPlan represents the first multimodal large language model agent that mimics the behaviors of human planners in radiation oncology clinics, achieving promising results in automating the treatment planning process without the need for additional training.
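The evaluate-feedback-adjust cycle described in the Approach can be sketched as a generic closed loop. This is a minimal illustration only: `evaluate`, `adjust`, and the objectives dictionary are hypothetical interfaces standing in for the GPT-4V plan critique and the treatment-planning-system parameter update, not the GPT-RadPlan API:

```python
def refine_plan(objectives, evaluate, adjust, max_iters=10):
    """Closed-loop refinement: evaluate the current plan, obtain
    feedback, adjust optimization objectives, and repeat until the
    evaluator is satisfied (returns None) or iterations run out."""
    for _ in range(max_iters):
        feedback = evaluate(objectives)   # e.g., a DVH/dose critique
        if feedback is None:              # plan meets all clinical goals
            return objectives
        objectives = adjust(objectives, feedback)
    return objectives
```

The key design point the abstract highlights is that the language model never touches the optimizer directly; it only emits feedback that is mapped onto parameter changes, so the loop terminates when the critique is empty.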

  • Journal: Physics in Medicine & Biology
  • Publication Date: Jul 15, 2025
  • Authors: Sheng Liu + 12

Beyond Forecasts- The Rise of Bionic Demand Planning in Digitally Enabled Supply Chains

In an increasingly complex and volatile business environment, popularly described as the VUCA world, companies are turning to Bionic Demand Planning as a strategic enabler to enhance supply chain performance. This approach amalgamates the analytical power of Artificial Intelligence (AI) and Machine Learning (ML) with human expertise, enabling more accurate, agile, and resilient demand forecasting. This review synthesizes key academic research and leading industry perspectives to present a comprehensive overview of Bionic Demand Planning. It examines how human-machine collaboration improves forecasting outcomes, identifies emerging best practices, and highlights both the benefits and challenges associated with its implementation. By collating insights from scholarly studies and practical industry reports, this paper offers a roadmap for organizations seeking to adopt Bionic Demand Planning as a competitive advantage.

  • Journal: International Research Journal on Advanced Engineering and Management (IRJAEM)
  • Publication Date: Jul 15, 2025
  • Authors: Sameera Sultana Saif Shaikh + 1

Automated novelty evaluation of academic paper: A collaborative approach integrating human and large language model knowledge

Abstract Novelty is a crucial criterion in the peer‐review process for evaluating academic papers. Traditionally, it is judged by experts or measured by unique reference combinations. Both methods have limitations: experts have limited knowledge, and the effectiveness of the combination method is uncertain. Moreover, it is unclear if unique citations truly measure novelty. The large language model (LLM) possesses a wealth of knowledge, while human experts possess judgment abilities that the LLM does not possess. Therefore, our research integrates the knowledge and abilities of LLM and human experts to address the limitations of novelty assessment. The most common novelty in academic papers is the introduction of new methods. In this paper, we propose leveraging human knowledge and LLM to assist pre‐trained language models (PLMs, e.g., BERT, etc.) in predicting the method novelty of papers. Specifically, we extract sentences related to the novelty of the academic paper from peer‐review reports and use LLM to summarize the methodology section of the academic paper, which are then used to fine‐tune PLMs. In addition, we have designed a text‐guided fusion module with novel Sparse‐Attention to better integrate human and LLM knowledge. We compared the method we proposed with a large number of baselines. Extensive experiments demonstrate that our method achieves superior performance.

  • Journal: Journal of the Association for Information Science and Technology
  • Publication Date: Jul 15, 2025
  • Authors: Wenqing Wu + 2

Mitigating recruitment and selection challenges through the utilization of AI

Human resource management (HRM) is a crucial component of an organization’s management and aims to enhance employee efficiency and an organization’s competitiveness. As organizations advance and grow, recruitment and selection decisions become more critical. The thinking of human resources practitioners and experts must be transformed to accommodate the current workplace. Practitioners and experts must ensure that qualified candidates are attracted to the organization at the right time to fill open positions. Traditional HRM methods are insufficient for the progressively more complicated HRM challenges. Recruitment and selection, as one HRM component, is a process with numerous challenges. This study examines how the use of AI technologies can assist in mitigating recruitment and selection challenges. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and soliciting data from databases such as Web of Science, Google Scholar, and Science Direct, a systematic literature review analysis was conducted to investigate the human resources challenges in recruitment and selection and explore how AI can moderate these challenges. The analysis indicated several challenges, including high costs, bribery and corruption, political interference, inadequate job descriptions, nepotism and favoritism, and lengthy recruitment and selection processes. Recommendations of the study suggest that the accuracy and efficiency of recruitment and selection can be enhanced by involving AI technologies, which can assist in lowering the risks and expenses associated with recruitment and selection.

  • Journal: International Journal of Research in Business and Social Science (2147-4478)
  • Publication Date: Jul 15, 2025
  • Authors: Simangele Mkhize + 1

Automated Test Generation and Marking Using LLMs

This paper presents an innovative exam-creation and grading system powered by advanced natural language processing and local large language models. The system automatically generates clear, grammatically accurate questions from both short passages and longer documents across different languages, supports multiple formats and difficulty levels, and ensures semantic diversity while minimizing redundancy, thus maximizing the percentage of the material that is covered in the generated exam paper. For grading, it employs a semantic-similarity model to evaluate essays and open-ended responses, awards partial credit, and mitigates bias from phrasing or syntax via named entity recognition. A major advantage of the proposed approach is its ability to run entirely on standard personal computers, without specialized artificial intelligence hardware, promoting privacy and exam security while maintaining low operational and maintenance costs. Moreover, its modular architecture allows the seamless swapping of models with minimal intervention, ensuring adaptability and the easy integration of future improvements. A requirements–compliance evaluation, combined with established performance metrics, was used to review and compare two popular multilingual LLMs and monolingual alternatives, demonstrating the system’s effectiveness and flexibility. The experimental results show that the system achieves a grading accuracy within a 17% normalized error margin compared to that of human experts, with generated questions reaching up to 89.5% semantic similarity to source content. The full exam generation and grading pipeline runs efficiently on consumer-grade hardware, with average inference times under 30 s.
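The partial-credit grading described above maps a semantic-similarity score onto a point scale. A minimal sketch of that mapping, using bag-of-words cosine as a lightweight stand-in for the paper's semantic-similarity model; the thresholds and function names are illustrative assumptions, not the system's actual parameters:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine; a stand-in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(answer: str, reference: str, max_points: float = 10.0,
          full_credit: float = 0.9, zero_credit: float = 0.3) -> float:
    """Award full marks above one similarity threshold, zero below
    another, and linear partial credit in between."""
    sim = cosine_similarity(answer, reference)
    if sim >= full_credit:
        return max_points
    if sim <= zero_credit:
        return 0.0
    return max_points * (sim - zero_credit) / (full_credit - zero_credit)
```

In the actual system a sentence-embedding model would replace the word-count cosine, which is what makes the grading robust to phrasing and syntax differences.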

  • Journal: Electronics
  • Publication Date: Jul 15, 2025
  • Authors: Ioannis Papachristou + 2

MO-SAM: Testing the reliability and limits of mine feature delineation using Segment Anything Model to democratize mine observation and research

The purpose of this paper is to leverage the growth of AI-enabled tools to support the democratization of mine observation (MO) research. Mining is essential to meet projected demand for renewable energy technologies crucial to global climate mitigation objectives, but all mining activities pose local and regional challenges to environmental sustainability. Such challenges can be mitigated by good governance, but unequal access among stakeholders to accurately interpreted satellite imagery can weaken good governance. Using readily available software—QGIS, and Segment Anything Model (SAM)—this paper develops and tests the reliability of MO-SAM, a new method to identify and delineate features within the spatially-explicit mine extent at a high level of detail. It focuses on dry tailings, waste dumps, and stockpiles in above-ground mining areas. While we intend for MO-SAM to be used generally, this study tested it on mining areas for energy-critical materials: lithium (Li), cobalt (Co), rare earth elements (REE), and platinum group elements (PGE), selected for their importance to the global transition to renewable energy. MO-SAM demonstrates generalizability through prompt engineering, but performance limitations were observed in imagery with complex mining landscape scenarios, including spatial variations in image morphology and boundary sharpness. Our analysis provides data-driven insights to support advances in the use of MO-SAM for analyzing and monitoring large-scale mining activities with greater speed than methods that rely on manual delineation, and with greater precision than practices that focus primarily on changes in the spatially-explicit mine extent. It also provides insights into the importance of multidisciplinary human expertise in designing processes for and assessing the accuracy of AI-assisted remote sensing image segmentation as well as in evaluating the significance of the land use and land cover changes identified. 
This has widespread potential to advance the multidisciplinary application of AI for scientific and public interest, particularly in research on global-scale human-environment interactions such as industrial mining activities. This is methodologically significant because the potential and limitations of using large pre-trained image segmentation models such as SAM for analyzing remote sensing data are an emergent and underexplored issue. The results can help advance the utilization of large pre-trained segmentation models for remote sensing imagery analysis to support sustainability research and policy.

  • Journal: PLOS Sustainability and Transformation
  • Publication Date: Jul 15, 2025
  • Authors: Qitong Wang + 9

Performance of Large Language Models in Numerical Versus Semantic Medical Knowledge: Cross-Sectional Benchmarking Study on Evidence-Based Questions and Answers.

Clinical problem-solving requires processing of semantic medical knowledge, such as illness scripts, and numerical medical knowledge of diagnostic tests for evidence-based decision-making. As large language models (LLMs) show promising results in many aspects of language-based clinical practice, their ability to generate nonlanguage evidence-based answers to clinical questions is inherently limited by tokenization. This study aimed to evaluate LLMs' performance on two question types: numeric (correlating findings) and semantic (differentiating entities), while examining differences within and between LLMs in medical aspects and comparing their performance to humans. To generate straightforward multichoice questions and answers (Q and As) based on evidence-based medicine (EBM), we used a comprehensive medical knowledge graph (containing data from more than 50,000 peer-reviewed studies) and created the EBM questions and answers (EBMQAs). EBMQA comprises 105,222 Q and As, categorized by medical topics (e.g., medical disciplines) and nonmedical topics (e.g., question length), and classified into numerical or semantic types. We benchmarked a dataset of 24,000 Q and As on two state-of-the-art LLMs, GPT-4 (OpenAI) and Claude 3 Opus (Anthropic). We evaluated the LLMs' accuracy on semantic and numerical question types and according to sublabeled topics. In addition, we examined the question-answering rate of LLMs by enabling them to choose to abstain from responding to questions. For validation, we compared the results for 100 unrelated numerical EBMQA questions between six human medical experts and the two language models. In an analysis of 24,542 Q and As, Claude 3 and GPT-4 performed better on semantic Q and As (68.7%, n=1593 and 68.4%, n=1709, respectively) than on numerical Q and As (61.3%, n=8583 and 56.7%, n=12,038, respectively), with Claude 3 outperforming GPT-4 in numerical accuracy (P<.001).
A median accuracy gap of 7% (IQR 5%-10%) was observed between the best and worst sublabels per topic, with different LLMs excelling in different sublabels. Focusing on Medical Discipline sublabels, Claude 3 performed well in neoplastic disorders but struggled with genitourinary disorders (69%, n=676 vs 58%, n=464; P<.0001), while GPT-4 excelled in cardiovascular disorders but struggled with neoplastic disorders (60%, n=1076 vs 53%, n=704; P=.0002). Furthermore, humans (82.3%) surpassed both Claude 3 (64.3%; P<.001) and GPT-4 (55.8%; P<.001) in the validation test. The Spearman correlation between question-answering rate and accuracy was not significant for either Claude 3 or GPT-4 (ρ=0.12, P=.69; ρ=0.43, P=.13). Both LLMs excelled more in semantic than numerical Q and As, with Claude 3 surpassing GPT-4 in numerical Q and As. However, both LLMs showed inter- and intramodel gaps in different medical aspects and remained inferior to humans. In addition, their ability to respond or abstain from answering a question does not reliably predict how accurately they perform when they do attempt to answer questions. Thus, their medical advice should be treated with caution.
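The abstention analysis above rests on a Spearman rank correlation between answering rate and accuracy. A minimal pure-Python implementation of that statistic, as an illustrative sketch rather than the authors' analysis code:

```python
def _ranks(xs):
    """Rank values 1..n, assigning the average rank to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, the statistic asks only whether topics where the model answers more often are also topics where it answers more accurately, which is exactly the relationship the study found to be non-significant.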

  • Journal: Journal of Medical Internet Research
  • Publication Date: Jul 14, 2025
  • Authors: Eden Avnat + 14

Large language models in medical education: a comparative cross-platform evaluation in answering histological questions

ABSTRACT Large language models (LLMs) have shown promising capabilities across medical disciplines, yet their performance in basic medical sciences remains incompletely characterized. Medical histology, requiring factual knowledge and interpretative skills, provides a unique domain for evaluating AI capabilities in medical education. To evaluate and compare the performance of five current LLMs: GPT-4.1, Claude 3.7 Sonnet, Gemini 2.0 Flash, Copilot, and DeepSeek R1 on correctly answering medical histology multiple choice questions (MCQs). This cross-sectional comparative study used 200 USMLE-style histology MCQs across 20 topics. Each LLM completed all the questions in three separate attempts. Performance metrics included accuracy rates, test-retest reliability (ICC), and topic-specific analysis. Statistical analysis employed ANOVA with post-hoc Tukey’s tests and two-way mixed ANOVA for system-topic interactions. All LLMs achieved exceptionally high accuracy (Mean 91.1%, SD 7.2). Gemini performed best (92.0%), followed by Claude (91.5%), Copilot (91.0%), GPT-4 (90.8%), and DeepSeek (90.3%), with no significant differences between systems (p > 0.05). Claude showed the highest reliability (ICC = 0.931), followed by GPT-4 (ICC = 0.882). Complete accuracy and reproducibility (100%) were detected in Histological Methods, Blood and Hemopoiesis, and Circulatory System, while Muscle tissue (76.0%) and Lymphoid System (84.7%) presented the greatest challenges. LLMs demonstrate exceptional accuracy and reliability in answering histological MCQs, significantly outperforming other medical disciplines. Minimal inter-system variability suggests technological maturity, though topic-specific challenges and reliability concerns indicate the continued need for human expertise. These findings reflect rapid AI advancement and identify histology as particularly suitable for AI-assisted medical education. 
Clinical trial number: The clinical trial number is not pertinent to this study as it does not involve medicinal products or therapeutic interventions.

  • Journal: Medical Education Online
  • Publication Date: Jul 12, 2025
  • Authors: Volodymyr Mavrych + 3

AI's ability to interpret unlabeled anatomy images and supplement educational research as an AI rater.

Evidence suggests custom chatbots are superior to commercial generative artificial intelligence (GenAI) systems for text-based anatomy content inquiries. This study evaluates ChatGPT-4o's and Claude 3.5 Sonnet's capabilities to interpret unlabeled anatomical images. Secondarily, ChatGPT o1-preview was evaluated as an AI rater to grade AI-generated outputs using a rubric and was compared against human raters. Anatomical images (five musculoskeletal, five thoracic) representing diverse image-based media (e.g., illustrations, photographs, MRI) were annotated with identification markers (e.g., arrows, circles) and uploaded to each GenAI system for interpretation. Forty-five prompts (i.e., 15 first-order, 15 second-order, and 15 third-order questions) with associated images were submitted to both GenAI systems across two timepoints. Responses were graded by anatomy experts for factual accuracy and superfluity (the presence of excessive wording) on a three-point Likert scale. ChatGPT o1-preview was tested for agreement against human anatomy experts to determine its usefulness as an AI rater. Statistical analyses included inter-rater agreement, hierarchical linear modeling, and test-retest reliability. ChatGPT-4o's factual accuracy score across 45 outputs was 68.0% compared to Claude 3.5 Sonnet's score of 61.5% (p = 0.319). As an AI rater, ChatGPT o1-preview showed moderate to substantial agreement with human raters (Cohen's kappa = 0.545-0.755) for evaluating factual accuracy according to a rubric of textbook answers. Further improvements and evaluations are needed before commercial GenAI systems can be used as credible student resources in anatomy education. Similarly, ChatGPT o1-preview demonstrates promise as an AI assistant for educational research, though further investigation is warranted.
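The rater-agreement figures above (Cohen's kappa between the AI rater and human experts) come from a standard chance-corrected agreement statistic, which is short enough to compute directly. A minimal sketch, not the authors' analysis code:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label rates
    expected = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)
```

Values in the 0.545 to 0.755 range reported above correspond to the conventional "moderate" to "substantial" agreement bands.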

  • Journal: Anatomical Sciences Education
  • Publication Date: Jul 11, 2025
  • Authors: Lord J Hyeamang + 9

Technological Advances in Healthcare and Medical Deontology: Towards a Hybrid Clinical Methodology

The rapid advancements in healthcare technologies are reshaping the medical landscape, prompting a reconsideration of clinical methodologies and their ethical foundations. This article explores the need for an updated approach to medical deontology, emphasizing the transition from traditional practices to a hybrid clinical methodology that integrates both human expertise and technological innovations. With the increasing use of Artificial Intelligence, data analytics, and advanced medical tools, healthcare professionals are presented with new ethical and professional challenges. These challenges demand a reevaluation of professional responsibility, highlighting the importance of scientific evidence in decision-making while mitigating the influence of economic and ideological factors. By framing medical practice within a systemic and integrated perspective, this article proposes a model that moves beyond the reductionist and anti-reductionist dualism, fostering a more realistic understanding of healthcare. This new paradigm necessitates the evolution of the Medical Code of Ethics, integrating the concept of “medical intelligence” to address the complexities of data management and its ethical implications. The article ultimately advocates for a dynamic and adaptive approach that aligns medical practice with emerging technologies, ensuring that patient care remains person-centered and ethically grounded in a rapidly changing healthcare environment.

  • Journal: Healthcare
  • Publication Date: Jul 10, 2025
  • Author: Vittoradolfo Tambone + 11

A deep learning software tool for automated sleep staging in rats via single channel EEG

Poor quality and poor duration of sleep have been associated with cognitive decline, diseases, and disorders. Therefore, sleep studies are imperative to recapitulate phenotypes associated with poor sleep quality and uncover mechanisms contributing to psychopathology. Classification of sleep stages, vigilance state bout durations, and the number of transitions amongst vigilance states serve as proxies for evaluating sleep quality in preclinical studies. Currently, the gold standard for sleep staging is expert human inspection of polysomnography (PSG) obtained from preclinical rodent models, and this approach is immensely time-consuming. To accelerate the analysis, we developed a deep-learning-based software tool for automated sleep stage classification in rats. This study aimed to develop an automated method for classifying three sleep stages in rats (REM/paradoxical sleep, NREM/slow-wave sleep, and wakefulness) using a deep learning approach based on single-channel EEG data. Single-channel EEG data were acquired from 16 rats, each undergoing two 24 h recording sessions. The data were labeled by human experts in 10 s epochs corresponding to three stages: REM/paradoxical sleep, NREM/slow-wave sleep, and wakefulness. A deep neural network (DNN) model was designed and trained to classify these stages using the raw temporal data from the EEG. The DNN achieved strong performance in predicting the three sleep stages, with an average F1 score of 87.6% over a cross-validated test set. The algorithm was able to predict key parameters of sleep architecture, including total bout duration, average bout duration, and number of bouts, with significant accuracy. Our deep learning model effectively automates the classification of sleep stages using single-channel EEG data in rats, reducing the need for labor-intensive manual annotation. This tool enables high-throughput sleep studies and may accelerate research into sleep-related pathologies.
Furthermore, we provide over 700 h of expert-scored sleep data, available for public use in future research studies.
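The reported average F1 of 87.6% is a per-stage F1 averaged over the three classes. The abstract does not specify the averaging scheme; the sketch below assumes an unweighted (macro) average and uses invented epoch labels rather than the study's data.

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Hypothetical 10 s epochs: W = wakefulness, N = NREM, R = REM
true_stages = ["W", "W", "N", "N", "R", "R"]
pred_stages = ["W", "N", "N", "N", "R", "W"]
print(round(macro_f1(true_stages, pred_stages, ["W", "N", "R"]), 3))  # 0.656
```

Macro averaging weights each stage equally regardless of how many epochs it covers, which matters for sleep data where REM epochs are far rarer than NREM or wake.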

  • Journal: NPP—Digital Psychiatry and Neuroscience
  • Publication Date: Jul 10, 2025
  • Author: Andrew Smith + 5

Abstract A060: OncoMindPro: An AI-augmented assistant to oncologists

Abstract Background: Medical oncologists face increasing challenges spanning accurate diagnosis through precise treatment. Oncologists typically must review enormous amounts of structured and unstructured data, including the patient's history of present illness, pathological diagnosis, imaging reports, genomic tests, and clinical laboratory results, to make decisions about accurate diagnosis and personalized treatment. The purpose of this study was to build an artificial intelligence system that augments this massive, complex data and assists oncologists in decision-making for precise diagnosis and treatment options. Methods: This retrospective study involved 2036 patients with advanced cancer. Each case was evaluated using OncoMindPro along with 4 different large multimodal models (LMMs) (OpenAI, Grok3 API, BioMedLM, and DeepSeek R1) and oncologists. OncoMindPro was built on a robust multimodal medical data fusion architecture and a curated knowledge base using LMMs. The augmented AI process generates patient medical records (PMRs) with precisely summarized clinical and diagnostic indications. Qualitative analysis of the overall quality of AI-generated PMRs from OncoMindPro, the 4 other LMMs, and oncologists was conducted using Kappa analysis. Furthermore, OncoMindPro, the 4 other LMMs, and oncologists were used to identify personalized treatment options. Five board-certified oncologists evaluated the overall quality of AI-generated PMRs using a 4-point scale, rated the likelihood of a treatment option coming from an LMM on a scale from 0 to 10 (0, extremely unlikely; 10, extremely likely), and decided whether the treatment option was clinically useful. Outcome measures included the number of treatment options; the precision, recall, and F1 score of the LMMs compared with expert oncologists; and the usefulness of recommendations. Results: For AI-generated PMRs, there were no significant differences in qualitative scores between oncologists and OncoMindPro (p > 0.05).
However, the qualitative scores of the 4 other LMMs were significantly lower than those of the oncologists (p < 0.05). For the 2036 cancer patients, a median (IQR) number of 4.0 (4.0-4.0), 4.2 (3.8-5.1), 7.1 (4.2-8.6), 8.7 (6.3-9.8), 10.3 (7.4-12.7), and 11.3 (10.1-15.4) treatment options each was identified by the human experts, OncoMindPro, and the 4 other LMMs, respectively. When considering the expert as a criterion standard, the treatment options generated by the 4 other LMMs reached F1 scores of 0.06, 0.13, 0.18, and 0.21 across all patients combined. Treatment options from OncoMindPro achieved a precision of 0.36 and a recall of 0.38, for an F1 score of 0.37. Conclusions: We built OncoMindPro as a novel AI-driven smart healthcare system through the successful implementation of multimodal fusion and LMMs in precision oncology. The AI capabilities of OncoMindPro help accurately match optimal treatment options to a given patient and provide prioritized treatment recommendations to oncologists. The overall quality of the patient medical records and treatment options recommended by OncoMindPro significantly surpassed the performance of the other LMMs. Citation Format: Samuel D. Ding, Xinjia Ding, Shikai Wu, Yan Ding, Qin Huang. OncoMindPro: An AI-augmented assistant to oncologists [abstract]. In: Proceedings of the AACR Special Conference in Cancer Research: Artificial Intelligence and Machine Learning; 2025 Jul 10-12; Montreal, QC, Canada. Philadelphia (PA): AACR; Clin Cancer Res 2025;31(13_Suppl):Abstract nr A060.
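OncoMindPro's reported F1 of 0.37 follows directly from its precision (0.36) and recall (0.38) via the harmonic mean; a one-line check using only the figures quoted above:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.36, 0.38), 2))  # 0.37, matching the reported value
```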

  • Journal: Clinical Cancer Research
  • Publication Date: Jul 10, 2025
  • Author: Samuel D Ding + 4

Empowering Human-AI Collaboration: Enterprise Technology Platforms and Human Expertise Synergy in Healthcare, Finance, and Scientific Research

The relationship between professionals and intelligent machines has taken an unexpected turn. Rather than the wholesale job displacement many predicted, a more nuanced reality emerged—one where artificial intelligence becomes a collaborative partner in complex decision-making. A multinational technology corporation exemplifies this shift through platforms that transform how doctors diagnose diseases, bankers assess risk, and scientists make discoveries. Real-world deployments tell compelling stories. Radiologists working with AI catch tumors too small for the human eye alone, yet clinical judgment determines treatment paths. Trading desks employ algorithms that process market data in milliseconds, while portfolio managers apply wisdom no machine possesses about human psychology and market irrationality. Experimental laboratories accelerate discovery through computational analysis of massive datasets, but breakthrough insights still require human creativity and intuition. This technology corporation took a different path when designing its tools. Azure AI offers massive computing power but lets users decide how to apply it. Copilot understands plain English requests rather than forcing people to learn programming languages. Power Platform turns business experts into app developers without writing code. Each choice reflects the same bet: professionals know their work better than any algorithm. The technology should adapt to them, not vice versa. Ethics weren't an afterthought either—built-in safeguards prevent discriminatory outcomes, protect privacy, and explain AI decisions in terms humans understand. Early results validate the approach. Healthcare institutions report improvement in diagnostic accuracy and reduction in physician burnout. Financial firms detect fraud patterns more effectively while maintaining customer relationships that require human empathy. 
Research teams tackle previously impossible problems by combining computational power with scientific creativity. Challenges remain substantial. Privacy regulations constrain healthcare applications. Financial compliance grows more complex as AI systems require new oversight frameworks. Scientific reproducibility demands careful documentation of algorithmic processes. Yet organizations navigating these challenges successfully demonstrate that human-AI collaboration represents not just a technological shift but a fundamental reimagining of professional work itself. The most effective implementations recognize that optimal outcomes emerge when each partner—human and machine—contributes their distinctive strengths to solving problems neither could address alone.

  • Journal: Journal of Computer Science and Technology Studies
  • Publication Date: Jul 10, 2025
  • Author: Venkata Babu Mogili

TCMP-300: A Comprehensive Traditional Chinese Medicinal Plant Dataset for Plant Recognition

Traditional Chinese Medicinal Plants (TCMPs) are widely used to prevent and treat human diseases. Since different medicinal plants have different therapeutic effects, plant recognition has become an important topic. Traditional identification of medicinal plants relies mainly on human experts, which cannot meet the growing demands of clinical practice. Artificial Intelligence (AI) research for plant recognition faces challenges due to the lack of a comprehensive medicinal plant dataset. Therefore, we present a TCMP dataset that includes 52,089 images in 300 categories. Compared to existing medicinal plant datasets, our dataset has more categories and fine-grained plant parts to facilitate comprehensive plant recognition. The plant images were collected through the Bing search engine and cleaned by a pretrained vision foundation model with human verification. We conduct technical validation by training several state-of-the-art image classification models with advanced data augmentation on the dataset, achieving 89.64% accuracy. Our dataset promotes the development and validation of advanced AI models for robust and accurate plant recognition.
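The 89.64% figure is classification accuracy on the dataset; assuming the standard top-1 metric for image classification, it is the fraction of validation images whose highest-scoring predicted category matches the ground-truth label. A minimal sketch with hypothetical labels:

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the ground-truth label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical category indices for ten validation images (300 classes in TCMP-300)
truth = [3, 7, 7, 1, 0, 3, 5, 2, 2, 9]
preds = [3, 7, 4, 1, 0, 3, 5, 2, 8, 9]
print(top1_accuracy(truth, preds))  # 0.8
```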

  • Journal: Scientific Data
  • Publication Date: Jul 9, 2025
  • Author: Yanling Zhang + 7

Critical AI literacy for applied linguistics and language education students

Abstract Following the generative artificial intelligence (GenAI) boom of the early 2020s, research in applied linguistics has become preoccupied with identifying how artificial intelligence (AI) and GenAI can be used effectively in research and education. As we emerge from our initial reactionary perspectives, there is an increased interest in delineating AI literacies so as to support learners who wish to engage with AI and GenAI as part of their learning process. This paper adds to this growing body of work, offering insight into critical AI literacies for applied linguistics and language education. Based on the critical grounded theory analysis of a focus group with Spanish students of applied linguistics, this paper teases apart the students’ technical understandings of AI, use of critical thinking when engaging with AI, awareness of the ethical concerns surrounding AI, and practical applications of AI. The discussions revealed a complex interaction of practical, ethical, and analytical considerations, emphasizing AI’s potential to augment but not replace human expertise. Ethical considerations were linked with critical thinking, reflecting a deep integration of moral and practical dimensions in student discussions. Our analysis seeks to inform current research that develops both frameworks and theoretical models for language education and applied linguistics education.

  • Journal: Journal of China Computer-Assisted Language Learning
  • Publication Date: Jul 9, 2025
  • Author: Pascual Pérez-Paredes + 2

Copyright 2025 Cactus Communications. All rights reserved.

Privacy PolicyCookies PolicyTerms of UseCareers