Few-shot and interpretable agentic framework based on large language models for data-efficient plant phenotyping
- Research Article
- 10.3390/buildings15152710
- Jul 31, 2025
- Buildings
Environmental, social, and governance (ESG) evaluation has become increasingly critical for company sustainability assessments, especially for enterprises in the construction industry with a high environmental burden. However, existing methods face limitations in subjective evaluation, inconsistent ratings across agencies, and a lack of industry specificity. To address these limitations, this study proposes a large language model (LLM)-based intelligent ESG evaluation model designed specifically for construction enterprises in China. The model integrates three modules: (1) an ESG report information extraction module utilizing natural language processing and Chinese pre-trained language models to identify and classify ESG-relevant statements; (2) an ESG rating prediction module employing XGBoost regression with SHAP analysis to predict company ratings and quantify individual statement contributions; and (3) an ESG intelligent evaluation module combining knowledge graph construction with fine-tuned Qwen2.5 language models using Chain-of-Thought (CoT) reasoning. Empirical validation demonstrates that the model achieves 93.33% accuracy in ESG rating classification and an R² score of 0.5312. SHAP analysis reveals that environmental factors contribute most significantly to rating predictions (38.7%), followed by governance (32.0%) and social dimensions (29.3%). The fine-tuned LLM integrated with the knowledge graph shows improved evaluation consistency, achieving 65% accuracy compared to 53.33% for standalone LLM approaches, a relative improvement of 21.88%. This study contributes to ESG evaluation methodology by providing an objective, industry-specific, and interpretable framework that enhances rating consistency and provides actionable insights for enterprise sustainability improvement. This research provides guidance for automated and intelligent ESG evaluations for construction enterprises while addressing critical gaps in current ESG practices.
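The rating-prediction module above pairs XGBoost regression with SHAP attribution. As a minimal sketch of the idea behind SHAP (not the authors' pipeline), the exact Shapley value of each feature can be computed by averaging its marginal contribution over all orderings of feature inclusion; the additive toy scoring function and the E/S/G feature names below are hypothetical:

```python
from itertools import permutations

def score(features):
    """Toy 'rating model': sum of present feature values (hypothetical)."""
    return sum(features.values())

def shapley_values(features):
    """Exact Shapley attribution: average each feature's marginal
    contribution to the score over all inclusion orderings."""
    names = list(features)
    totals = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        included = {}
        prev = score(included)
        for name in order:
            included[name] = features[name]
            curr = score(included)
            totals[name] += curr - prev
            prev = curr
    return {n: totals[n] / len(orderings) for n in names}

# Hypothetical per-dimension statement scores for one company.
contrib = shapley_values({"env": 0.4, "soc": 0.3, "gov": 0.3})
print(contrib)  # for an additive model, Shapley values equal the raw scores
```

Libraries such as `shap` compute fast approximations of this quantity for tree ensembles like XGBoost; the brute-force version above is only feasible for a handful of features.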
- Research Article
- 10.3390/bioengineering12111174
- Oct 28, 2025
- Bioengineering
This study introduces an explainable neuro-symbolic and large language model (LLM)-driven framework for intelligent interpretation of corneal topography and precision surgical decision support. In a prospective cohort of 20 eyes, comprehensive IOLMaster 700 reports were analyzed through a four-stage pipeline: (1) automated extraction of key parameters—including corneal curvature, pachymetry, and axial biometry; (2) mapping of these quantitative features onto a curated corneal disease and refractive-surgery knowledge graph; (3) Bayesian probabilistic inference to evaluate early keratoconus and surgical eligibility; and (4) explainable multi-model LLM reporting, employing DeepSeek and GPT-4.0, to generate bilingual physician- and patient-facing narratives. By transforming complex imaging data into transparent reasoning chains, the pipeline delivered case-level outputs within ~95 ± 12 s. When benchmarked against independent evaluations by two senior corneal specialists, the framework achieved 92 ± 4% sensitivity, 94 ± 5% specificity, 93 ± 4% accuracy, and an AUC of 0.95 ± 0.03 for early keratoconus detection, alongside an F1 score of 0.90 ± 0.04 for refractive surgery eligibility. The generated bilingual reports were rated ≥4.8/5 for logical clarity, clinical usefulness, and comprehensibility, with representative cases fully concordant with expert judgment. Comparative benchmarking against baseline CNN and ViT models demonstrated superior diagnostic accuracy (AUC = 0.95 ± 0.03 vs. 0.88 and 0.90, p < 0.05), confirming the added value of the neuro-symbolic reasoning layer. All analyses were executed on a workstation equipped with an NVIDIA RTX 4090 GPU and implemented in Python 3.10/PyTorch 2.2.1 for full reproducibility. 
By explicitly coupling symbolic medical knowledge with advanced language models and embedding explainable artificial intelligence (XAI) principles throughout data processing, reasoning, and reporting, this framework provides a transparent, rapid, and clinically actionable AI solution. The approach holds significant promise for improving early ectatic disease detection and supporting individualized refractive surgery planning in routine ophthalmic practice.
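Stage (3) of the pipeline above applies Bayesian probabilistic inference. A one-step sketch of that reasoning, using the reported sensitivity (92%) and specificity (94%) together with a hypothetical 5% prior prevalence (not a figure from the study), shows how a positive finding updates the keratoconus probability:

```python
def posterior_positive(prior, sensitivity, specificity):
    """Bayes' rule: P(disease | positive finding)."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Reported operating point; the 5% prior prevalence is a hypothetical example.
post = posterior_positive(prior=0.05, sensitivity=0.92, specificity=0.94)
print(round(post, 3))  # ~0.447: one positive screen is suggestive, not conclusive
```

In the full framework this update would be chained across multiple parameters drawn from the knowledge graph rather than applied to a single binary finding.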
- Research Article
- 10.3390/e26040321
- Apr 6, 2024
- Entropy
In underground industries, practitioners frequently employ argots to communicate discreetly and evade surveillance by investigative agencies. Proposing an innovative approach using word vectors and large language models, we aim to decipher and understand the myriad of argots in these industries, providing crucial technical support for law enforcement to detect and combat illicit activities. Specifically, positional differences in semantic space distinguish argots, and pre-trained language models' corpora are crucial for interpreting them. Expanding on these concepts, the article assesses the semantic coherence of word vectors in the semantic space based on the concept of information entropy. Simultaneously, we devised a labeled argot dataset, MNGG, and developed an argot recognition framework named CSRMECT, along with an argot interpretation framework called LLMResolve. These frameworks leverage the MECT model, the large language model, prompt engineering, and the DBSCAN clustering algorithm. Experimental results demonstrate that the CSRMECT framework outperforms the current optimal model by 10% in F1 score for argot recognition on the MNGG dataset, while the LLMResolve framework achieves 4% higher interpretation accuracy than the current optimal model. The related experiments also indicate a potential correlation between vector information entropy and model performance.
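The CSRMECT framework above groups word vectors with DBSCAN. As a self-contained sketch of how DBSCAN separates dense regions (candidate argot senses) from noise, here is a minimal pure-Python implementation run on hypothetical 2-D points standing in for word vectors:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points))
                     if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins cluster, no expansion
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points))
                           if dist(points[j], points[k]) <= eps]
            if len(j_neighbors) >= min_pts:
                queue.extend(j_neighbors)  # core point: expand the cluster
    return labels

# Two tight hypothetical "semantic" clusters plus one outlier.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (9, 0)]
print(dbscan(pts, eps=0.5, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
```

Real word vectors are high-dimensional, but `math.dist` handles any dimensionality; production use would typically rely on an optimized implementation such as scikit-learn's.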
- Research Article
- 10.3390/s25123822
- Jun 19, 2025
- Sensors (Basel, Switzerland)
Most existing fault diagnosis methods, although capable of extracting interpretable features such as attention-weighted fault-related frequencies, remain essentially black-box models that provide only classification results without transparent reasoning or diagnostic justification, limiting users' ability to understand and trust diagnostic outcomes. In this work, we present a novel, interpretable fault diagnosis framework that integrates spectral feature extraction with large language models (LLMs). Vibration signals are first transformed into spectral representations using Hilbert- and Fourier-based encoders to highlight key frequencies and amplitudes. A channel attention-augmented convolutional neural network provides an initial fault type prediction. Subsequently, structured information, including operating conditions, spectral features, and CNN outputs, is fed into a fine-tuned, enhanced LLM, which delivers both an accurate diagnosis and a transparent reasoning process. Experiments demonstrate that our framework achieves high diagnostic performance while substantially improving interpretability, making advanced fault diagnosis accessible to non-expert users in industrial settings.
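The pipeline above begins by converting vibration signals into spectral features before the CNN and LLM stages. A stdlib-only sketch of that first step (a naive DFT over a hypothetical waveform, not the authors' Hilbert/Fourier encoders) that extracts the dominant frequency from a sampled signal:

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Naive DFT: return the frequency (Hz) of the largest-amplitude bin,
    DC excluded, scanning only the first half of the spectrum."""
    n = len(signal)
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        amp = abs(coeff)
        if amp > best_amp:
            best_k, best_amp = k, amp
    return best_k * sample_rate / n

# Hypothetical vibration trace: a 50 Hz tone sampled at 800 Hz for 160 samples.
sig = [math.sin(2 * math.pi * 50 * t / 800) for t in range(160)]
print(dominant_frequency(sig, 800))  # 50.0
```

A real diagnosis pipeline would use an FFT and return several peak frequencies and amplitudes as features; fault types such as bearing defects show up as characteristic peaks in exactly this kind of spectrum.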
- Research Article
- 10.1371/journal.pone.0342256.r006
- Feb 9, 2026
- PLOS One
Cardiovascular diseases (CVDs) are leading causes of morbidity and mortality globally, with a growing burden in low- and middle-income countries such as Ethiopia. Early detection is limited by resource constraints, low screening uptake, and a lack of predictive tools tailored to local healthcare systems. This study presents an interpretable ensemble machine learning framework for predicting CVD risk via structured electronic medical record (EMR) data from public hospitals in Addis Ababa. We trained an XGBoost classifier on 20,960 anonymized records containing demographic, clinical, and physiological attributes. Preprocessing involves handling missing values, outlier capping, one-hot encoding, rare-category grouping, and dimensionality reduction. SHapley additive explanations (SHAPs) were used for feature attribution, and a large language model (Gemini) was used to translate SHAP outputs into plain-language narratives to enhance interpretability. The model achieved an accuracy of 0.99, with strong precision (0.99), recall (0.98), and F1-scores across both classes. SHAP analysis identified general_plan, history of present illness (HPI), musculoskeletal system (MSS) and diagnosis as key predictors. The integration of SHAP and LLMs provided transparent, clinician-friendly insights into model outputs, supporting adoption in resource-limited settings. This study demonstrates that combining ensemble learning with explainability techniques can yield highly accurate and interpretable CVD prediction models, offering potential for integration into clinical decision-support systems in Ethiopia.
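In the framework above, an LLM translates SHAP outputs into plain-language narratives. As a hedged, LLM-free stand-in for that final step, a simple template can turn signed feature attributions into clinician-readable sentences; the feature names and values below are invented placeholders, not study data:

```python
def narrate(shap_values, top_k=2):
    """Turn signed feature attributions into short plain-language
    sentences, strongest contributors first (templating, not an LLM)."""
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for name, value in ranked[:top_k]:
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"{name} {direction} the predicted risk "
                     f"(impact {abs(value):.2f}).")
    return " ".join(lines)

# Hypothetical attributions for one patient record.
summary = narrate({"blood_pressure": 0.31, "age": 0.12, "exercise": -0.20})
print(summary)
```

An LLM adds fluency and clinical context on top of this, but the ranking and sign information it narrates come directly from the SHAP values, as in the sketch.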
- Discussion
- 10.1111/cogs.13430
- Mar 1, 2024
- Cognitive science
This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information-based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid-20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The subsequent development of sociolinguistics and linguistic anthropology, especially since the 1970s, provided critical perspectives and empirical methods that both challenged and enriched this framework. This letter proposes that two pivotal concepts derived from this development, metapragmatic function and indexicality, offer a fruitful theoretical perspective for integrating the semantic, textual, and pragmatic, contextual dimensions of communication, an amalgamation that contemporary LLMs have yet to fully achieve. The author believes that contemporary cognitive science is at a crucial crossroads, where fostering interdisciplinary dialogues among computational linguistics, social linguistics and linguistic anthropology, and cognitive and social psychology is in particular imperative. Such collaboration is vital to bridge the computational, cognitive, and sociocultural aspects of human communication and human-AI interaction, especially in the era of large language and multimodal models and human-centric Artificial Intelligence (AI).
- Research Article
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Conference Article
- 10.1145/3510003.3510203
- May 21, 2022
Large pre-trained language models such as GPT-3 [10], Codex [11], and Google's language model [7] are now capable of generating code from natural language specifications of programmer intent. We view these developments with a mixture of optimism and caution. On the optimistic side, such large language models have the potential to improve productivity by providing an automated AI pair programmer for every programmer in the world. On the cautionary side, since these large language models do not understand program semantics, they offer no guarantees about the quality of the suggested code. In this paper, we present an approach to augment these large language models with post-processing steps based on program analysis and synthesis techniques that understand the syntax and semantics of programs. Further, we show that such techniques can make use of user feedback and improve with usage. We present our experiences from building and evaluating such a tool, Jigsaw, targeted at synthesizing code for the Python Pandas API using multi-modal inputs. Our experience suggests that as these large language models evolve for synthesizing code from intent, Jigsaw has an important role to play in improving the accuracy of such systems.
- Research Article
- 10.1016/j.procs.2023.09.086
- Jan 1, 2023
- Procedia Computer Science
A Large and Diverse Arabic Corpus for Language Modeling
- Research Article
- 10.1108/ir-02-2025-0074
- Jul 29, 2025
- Industrial Robot: the international journal of robotics research and application
Purpose: This study aims to explore the integration of large language models (LLMs) and vision-language models (VLMs) in robotics, highlighting their potential benefits and the safety challenges they introduce, including robustness issues, adversarial vulnerabilities, privacy concerns and ethical implications.
Design/methodology/approach: This survey conducts a comprehensive analysis of the safety risks associated with LLM- and VLM-powered robotic systems. The authors review existing literature, analyze key challenges, evaluate current mitigation strategies and propose future research directions.
Findings: The study identifies that ensuring the safety of LLM-/VLM-driven robots requires a multi-faceted approach. While current mitigation strategies address certain risks, gaps remain in real-time monitoring, adversarial robustness and ethical safeguards.
Originality/value: This study offers a structured and comprehensive overview of the safety challenges in LLM-/VLM-driven robotics. It contributes to ongoing discussions by integrating technical, ethical and regulatory perspectives to guide future advancements in safe and responsible artificial intelligence-driven robotics.
- Research Article
- 10.1038/s41698-025-00916-7
- May 23, 2025
- npj Precision Oncology
Large language models (LLMs) and large visual-language models (LVLMs) have exhibited near-human levels of knowledge, image comprehension, and reasoning abilities, and their performance has undergone evaluation in some healthcare domains. However, a systematic evaluation of their capabilities in cervical cytology screening has yet to be conducted. Here, we constructed CCBench, a benchmark dataset dedicated to the evaluation of LLMs and LVLMs in cervical cytology screening, and developed a GPT-based semi-automatic evaluation pipeline to assess the performance of six LLMs (GPT-4, Bard, Claude-2.0, LLaMa-2, Qwen-Max, and ERNIE-Bot-4.0) and five LVLMs (GPT-4V, Gemini, LLaVA, Qwen-VL, and ViLT) on this dataset. CCBench comprises 773 question-answer (QA) pairs and 420 visual-question-answer (VQA) triplets, making it the first dataset in cervical cytology to include both QA and VQA data. We found that LLMs and LVLMs demonstrate promising accuracy and specialization in cervical cytology screening. GPT-4 achieved the best performance on the QA dataset, with an accuracy of 70.5% for close-ended questions and an average expert evaluation score of 6.9/10 for open-ended questions. On the VQA dataset, Gemini achieved the highest accuracy for close-ended questions at 67.8%, while GPT-4V attained the highest expert evaluation score of 6.1/10 for open-ended questions. In addition, LLMs and LVLMs showed varying abilities in answering questions across different topics and difficulty levels. However, their performance remains inferior to the expertise exhibited by cytopathology professionals, and the risk of generating misinformation could lead to potential harm. Therefore, substantial improvements are required before these models can be reliably deployed in clinical practice.
- Research Article
- 10.1080/13658816.2025.2577252
- Nov 1, 2025
- International Journal of Geographical Information Science
The widespread use of online geoinformation platforms, such as Google Earth Engine (GEE), has produced numerous scripts. Extracting domain knowledge from these crowdsourced scripts supports understanding of geoprocessing workflows. Small Language Models (SLMs) are effective for semantic embedding but struggle with complex code; Large Language Models (LLMs) can summarize scripts, yet lack consistent geoscience terminology to express knowledge. In this paper, we propose Geo-CLASS, a knowledge extraction framework for geospatial analysis scripts that coordinates large and small language models. Specifically, we designed domain-specific schemas and a schema-aware prompt strategy to guide LLMs to generate and associate entity descriptions, and employed SLMs to standardize the outputs by mapping these descriptions to a constructed geoscience knowledge base. Experiments on 237 GEE scripts, selected from 295,943 scripts in total, demonstrated that our framework outperformed LLM baselines, including Llama-3, GPT-3.5 and GPT-4o. In comparison, the proposed framework improved accuracy in recognizing entities and relations by up to 31.9% and 12.0%, respectively. Ablation studies and performance analysis further confirmed the effectiveness of key components and the robustness of the framework. Geo-CLASS has the potential to enable the construction of geoprocessing modeling knowledge graphs, facilitate domain-specific reasoning and advance script generation via Retrieval-Augmented Generation (RAG).
- Research Article
- 10.1109/embc53108.2024.10782119
- Jul 15, 2024
- Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
Deep phenotyping is the detailed description of patient signs and symptoms using concepts from an ontology. The deep phenotyping of the numerous physician notes in electronic health records requires high-throughput methods. Over the past 30 years, progress has been made toward making high-throughput phenotyping feasible. In this study, we demonstrate that a large language model and a hybrid NLP model (combining word vectors with a machine learning classifier) can perform high-throughput phenotyping on physician notes with high accuracy. Large language models will likely emerge as the preferred method for the high-throughput deep phenotyping of physician notes. Clinical relevance: Large language models will likely emerge as the dominant method for the high-throughput phenotyping of signs and symptoms in physician notes.
- Research Article
- 10.1038/s41746-024-01024-9
- Feb 19, 2024
- NPJ Digital Medicine
Large language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology and medicine has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Here we report our proposed few-shot learning approach, which uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrate that the LLM-based prediction model achieves significant accuracy with very few or zero samples. Our proposed model, the CancerGPT (with ~ 124M parameters), is comparable to the larger fine-tuned GPT-3 model (with ~ 175B parameters). Our research contributes to tackling drug pair synergy prediction in rare tissues with limited data, and also advancing the use of LLMs for biological and medical inference tasks.
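The CancerGPT work above frames drug-pair synergy prediction as few-shot learning over text. A minimal sketch of assembling a k-shot prompt from labeled demonstrations (the drug names, tissue, and labels below are invented placeholders, not study data, and the prompt format is illustrative rather than the authors'):

```python
def build_k_shot_prompt(examples, query, k=2):
    """Assemble a k-shot prompt: k labeled demonstrations followed by
    the unlabeled query pair for the model to complete."""
    lines = ["Predict whether each drug pair is synergistic in the given tissue."]
    for drug_a, drug_b, tissue, label in examples[:k]:
        lines.append(f"Pair: {drug_a} + {drug_b} | Tissue: {tissue} "
                     f"| Synergistic: {label}")
    drug_a, drug_b, tissue = query
    lines.append(f"Pair: {drug_a} + {drug_b} | Tissue: {tissue} | Synergistic:")
    return "\n".join(lines)

# Invented placeholder demonstrations and query.
demos = [("DrugA", "DrugB", "bone", "yes"), ("DrugC", "DrugD", "bone", "no")]
prompt = build_k_shot_prompt(demos, ("DrugE", "DrugF", "bone"))
print(prompt)
```

With k = 0 the same function yields a zero-shot prompt, matching the paper's observation that LLMs can make predictions from prior textual knowledge even without in-context examples.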