Published in the last 50 years
Articles published on Domain-specific Modeling
- New
- Research Article
- 10.1080/15323269.2025.2576895
- Nov 7, 2025
- Journal of Hospital Librarianship
- Mustafa Civelekler + 1 more
This study evaluated the accuracy of four artificial intelligence models—ChatGPT, Copilot, DeepSeek, and Gemini—in generating PubMed citations for neuro-ophthalmology research. Using thirty-five standardized clinical paragraphs from The Review of Ophthalmology (4th edition), each model produced references formatted in AMA 11 style. Accuracy was determined by checking publication correctness, DOI matching, and citation relevance, with expert reviewers classifying outputs as Fully Cited, Partially Cited, or Not Cited. Inter-rater reliability was measured using Cohen’s kappa. Among the models, DeepSeek demonstrated the highest accuracy (75.0%), followed by Copilot (60.5%), ChatGPT (31.4%), and Gemini (3.0%). Common issues included DOI mismatches and irrelevant references, with Gemini generating 32 incorrect citations. Expert evaluation confirmed DeepSeek’s superiority, producing 15 fully cited references compared to Copilot’s 7 and 4 each for ChatGPT and Gemini. Reviewer agreement was substantial (κ = 0.70). The findings suggest that while domain-specific AI models can aid citation generation, frequent inaccuracies and hallucinations highlight the necessity of human oversight. A hybrid approach combining AI with expert review may provide more reliable outcomes.
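The reviewer-agreement statistic reported above can be computed directly from two raters' labels. A minimal plain-Python sketch, with hypothetical ratings standing in for the study's three categories:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labeled independently
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings over the study's three outcome categories
a = ["Fully", "Fully", "Partially", "Not"]
b = ["Fully", "Partially", "Partially", "Not"]
print(round(cohens_kappa(a, b), 3))  # -> 0.636
```

Kappa discounts the agreement two raters would reach by chance, which is why it is preferred over raw percent agreement for classification tasks like this one.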
- New
- Research Article
- 10.70315/uloap.ulete.2025.0204004
- Nov 5, 2025
- Universal Library of Engineering Technology
- Roman Ishchenko
The candidate–job matching (CJM) problem, central to high-skill recruitment in domains like technology, management, and finance, has seen rapid progress through machine learning (ML) since 2021. Modern systems move beyond simple keyword matching, leveraging advanced natural language processing (NLP), graph representations, and hybrid recommender methods. Transformer-based models (e.g. BERT and derivatives) now embed resumes and job descriptions into semantic spaces, enabling nuanced similarity comparisons. Graph neural networks (GNNs) capture rich relationships among candidates, skills, and jobs, often outperforming traditional neural models in screening tasks. Classical ML approaches (e.g. support vector machines, tree ensembles) remain useful for structured feature matching but are complemented by deep models for unstructured text. Recommender-system techniques – including collaborative filtering, content-based filtering, and hybrid designs – incorporate contextual signals (experience, industries, user behaviors) to improve personalization. Reviewed benchmarks report that fine-tuned transformers and GNNs can significantly boost ranking accuracy (e.g. ~15% NDCG improvements [1]) and screening sensitivity (e.g. GNN balanced accuracy 65.4% vs 55.0% for a plain MLP [2]). These gains come with challenges: neural approaches often act as black boxes, raising interpretability concerns, and large models incur high computational costs that demand scalable architectures (e.g. bi-encoder retrieval with cross-encoder re-ranking in multi-stage pipelines). Bias mitigation has become critical; domain-specific models have been shown to yield fairer outcomes than off-the-shelf large language models. 
This review surveys recent (2021–2025) peer-reviewed work on CJM, covering algorithmic approaches (SVMs, ensemble trees, Siamese and cross-encoder transformers, GNNs, and hybrid recommenders), model architectures, input representations (resumes, job text, skill ontologies), and evaluation methods. We synthesize experimental findings from academic studies, discussing strengths and limitations of each approach, including accuracy, robustness, interpretability, and fairness. Finally, we highlight open challenges and directions for making CJM more transparent and equitable while maintaining scalability in practice.
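The multi-stage pipeline mentioned above (cheap bi-encoder retrieval over all candidates, then expensive cross-encoder re-ranking of the survivors) can be sketched with toy scorers; the vectors and the token-overlap scorer below are illustrative stand-ins for real encoder models:

```python
import numpy as np

def bi_encoder_retrieve(query_vec, doc_vecs, k):
    # Stage 1: cosine similarity against every document embedding (cheap)
    sims = doc_vecs @ query_vec
    sims = sims / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return list(np.argsort(-sims)[:k])

def cross_encoder_rerank(query_tokens, docs_tokens, candidates):
    # Stage 2: pairwise scorer applied only to the k survivors (expensive);
    # token overlap stands in for a real cross-encoder relevance score
    score = lambda i: len(set(query_tokens) & set(docs_tokens[i]))
    return sorted(candidates, key=score, reverse=True)

# Toy corpus: 4 "job" vectors plus skill-token sets
doc_vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
docs_tokens = [{"python"}, {"python", "ml"}, {"sales"}, {"python", "ml", "nlp"}]
query_vec = np.array([1.0, 0.2])
query_tokens = {"python", "ml", "nlp"}

shortlist = bi_encoder_retrieve(query_vec, doc_vecs, k=3)
ranking = cross_encoder_rerank(query_tokens, docs_tokens, shortlist)
```

The design point this illustrates: the bi-encoder keeps the full-corpus pass linear in precomputed embeddings, while the quadratic-cost cross-encoder only ever sees the shortlist.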
- New
- Research Article
- 10.3390/philosophies10060122
- Nov 5, 2025
- Philosophies
- Jonah Y C Hsu
This paper presents a methodological framework, Tonal Isomorphism (TI), derived from Tonal Meta-Ontology (TMO), focusing on operational protocols rather than ontological foundations. Tonal Isomorphism is framed as a meta-protocol rather than a metaphysical doctrine: its purpose is to provide a transferable logic that bridges disciplinary silos. We argue that knowledge breakthroughs can emerge not through trial-and-error experimentation alone, but through the isomorphic translation of tonal structures into domain-specific models. The methodology is demonstrated through three key contributions: (1) the Operationalization of Metaphysics, where tonal principles are expressed in executable forms such as the ToneWarp Equation and integrity-preserving responsibility chains; (2) the Unified Generative Field, a cross-domain modeling scaffold applicable to contexts ranging from arithmetic closure to digital trust protocols; and (3) the Generative Proof, which positions the methodology itself as a living demonstration of its claims, resistant to external mimicry. In an era defined by AI’s capacity for replication and simulation, Tonal Isomorphism offers a framework for knowledge generation where truth is not fixed discovery but a defensible, continuously enacted act of creation.
- New
- Research Article
- 10.3390/info16110957
- Nov 4, 2025
- Information
- Aristeidis Karras + 5 more
This paper presents a systematic review of research (2020–2025) on the role of Large Language Models (LLMs) in cybersecurity, with emphasis on their integration into Big Data infrastructures. Based on a curated corpus of 235 peer-reviewed studies, this review synthesizes evidence across multiple domains to evaluate how models such as GPT-4, BERT, and domain-specific variants support threat detection, incident response, vulnerability assessment, and cyber threat intelligence. The findings confirm that LLMs, particularly when coupled with scalable Big Data pipelines, improve detection accuracy and reduce response latency compared with traditional approaches. However, challenges persist, including adversarial susceptibility, risks of data leakage, computational overhead, and limited transparency. The contribution of this study lies in consolidating fragmented research into a unified taxonomy, identifying sector-specific gaps, and outlining future research priorities: enhancing robustness, mitigating bias, advancing explainability, developing domain-specific models, and optimizing distributed integration. In doing so, this review provides a structured foundation for both academic inquiry and practical adoption of LLM-enabled cyberdefense strategies. The last search was conducted on 30 April 2025; the review followed the PRISMA 2020 methodology, with risk of bias assessed and random-effects syntheses conducted.
- New
- Research Article
- 10.1088/2632-2153/ae1acd
- Nov 3, 2025
- Machine Learning: Science and Technology
- Viet Anh Nguyen + 2 more
Unsupervised pre-training on vast amounts of graph data is critical in real-world applications wherein labeled data is limited, such as molecule property prediction or materials science. Existing approaches pre-train models for specific graph domains, neglecting the inherent connections within networks. This limits their ability to transfer knowledge to various supervised tasks. In this work, we propose a novel pre-training strategy on graphs that focuses on modeling their multi-resolution structural information, allowing us to capture global information of the whole graph while preserving local structures around its nodes. We extend Graph Wavelet Positional Encoding (WavePE; doi:10.1063/5.0152833) by pre-training a High-Order Permutation-Equivariant Autoencoder (HOPE-WavePE) to reconstruct node connectivities from their multi-resolution wavelet signals. Since our approach relies solely on the graph structure, it is domain-agnostic and adaptable to datasets from various domains, therefore paving the way for developing general graph structure encoders and graph foundation models. We theoretically demonstrate that for k given resolutions, the width required for the autoencoder to learn arbitrarily long-range information is O(n^(1/k) r^(1 + 1/k) ε^(−1/k)), where n and r denote the number of nodes and the rank of the normalized Laplacian, respectively, and ε is the error tolerance defined by the Frobenius norm. We also evaluate HOPE-WavePE on graph-level prediction tasks in different areas and show its superiority compared to other methods. Our source code is publicly available at https://github.com/HySonLab/WaveletPE.
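The multi-resolution wavelet signals that the autoencoder reconstructs from can be sketched in a few lines of numpy; the heat-kernel filter g(sλ) = exp(−sλ) on the normalized Laplacian is an illustrative choice, not necessarily the paper's exact filter bank:

```python
import numpy as np

def normalized_laplacian(adj):
    # L = I - D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt

def wavelet_operators(adj, scales):
    """One diffusion-wavelet operator per scale: Psi_s = U g(s*Lambda) U^T."""
    lam, u = np.linalg.eigh(normalized_laplacian(adj))
    return [u @ np.diag(np.exp(-s * lam)) @ u.T for s in scales]

# 4-node path graph; small s keeps local structure, large s mixes globally
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
psis = wavelet_operators(adj, scales=[0.0, 1.0, 5.0])
```

Each Psi_s row is that node's wavelet signal at one resolution; stacking the k operators gives the multi-resolution tensor a HOPE-WavePE-style autoencoder would be trained to invert back into node connectivity.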
- New
- Research Article
- 10.5614/itbj.ict.res.appl.2025.19.1.4
- Nov 3, 2025
- Journal of ICT Research and Applications
- Mohamed Yassine El Amrani + 3 more
Large language models (LLMs) have undergone rapid evolution and are highly effective in tasks such as text generation, question answering, and context-driven analysis. However, the unique requirements of Islamic studies, where textual authenticity, diverse jurisprudential interpretations, and deep semantic nuances are critical, present challenges for general LLMs. This article reviews the evolution of neural language models by comparing the historical progression of general LLMs with emerging Islamic-specific LLMs. We discuss the technical foundations of modern Transformer architectures and examine how recent advancements, such as GPT-4, DeepSeek, and Mistral, have expanded LLM capabilities. The paper also highlights the limitations of standard evaluation metrics like perplexity and BLEU in capturing doctrinal, ethical, and interpretative accuracy. To address these gaps, we propose specialized evaluation metrics to assess doctrinal correctness, internal consistency, and overall reliability. Finally, we outline a research roadmap aimed at developing robust, ethically aligned, and jurisprudentially precise Islamic LLMs.
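Perplexity, one of the standard metrics the paper argues is insufficient, only measures how confidently a model predicts each next token. A minimal sketch with hypothetical per-token probabilities:

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-likelihood per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns uniform probability 1/4 to each of 4 tokens
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # -> approx 4.0
```

The limitation the paper points to follows directly: a model can assign high probability (low perplexity) to fluent text that is doctrinally or interpretively wrong, which is why task-specific correctness metrics are proposed.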
- New
- Research Article
- 10.1016/j.jval.2025.04.2167
- Nov 1, 2025
- Value in health : the journal of the International Society for Pharmacoeconomics and Outcomes Research
- Rachael L Fleurence + 8 more
A Taxonomy of Generative Artificial Intelligence in Health Economics and Outcomes Research: An ISPOR Working Group Report.
- New
- Research Article
- 10.1016/j.jss.2025.112694
- Nov 1, 2025
- Journal of Systems and Software
- Manouchehr Zadahmad Jafarlou + 1 more
Domain-specific conflict resolution and model merge
- New
- Research Article
- 10.1016/j.cma.2025.118310
- Nov 1, 2025
- Computer Methods in Applied Mechanics and Engineering
- Mian Xiao + 2 more
Geometric learning for computational mechanics Part IV: Efficient mesh-based plasticity from a domain-specific foundation model
- New
- Research Article
- 10.1007/s00266-025-05393-8
- Oct 31, 2025
- Aesthetic plastic surgery
- Hengqing Cui + 5 more
Large language models (LLMs) have demonstrated potential in various medical fields. However, their application in aesthetic plastic surgery remains largely unexplored, particularly in clinical decision support and patient consultations. Given that plastic surgery integrates medical knowledge, aesthetic judgment, and doctor-patient communication, a systematic evaluation of LLM performance is needed. This study aims to assess the capabilities of three widely used LLMs, GPT-4o (OpenAI), DeepSeek R1 (DeepSeek), and Claude 3.5 (Anthropic), in aesthetic plastic surgery, including facial aesthetics, body contouring, and nonsurgical interventions, with the aim of providing evidence-based recommendations for model selection across different clinical contexts and informing future improvements in the design and optimization of domain-specific language models. A total of 125 questions were designed, covering multiple-choice examinations, clinical case analysis, expert guideline adherence, and patient consultation scenarios. Responses from each model were evaluated by three blinded plastic surgery experts based on predefined criteria, including accuracy, comprehensiveness, readability, humanistic care, and ethical considerations. DeepSeek R1 demonstrated performance that was superior to or at least comparable to GPT-4o and Claude 3.5 in multiple aspects, particularly in comprehensiveness (P = 0.04), readability (P < 0.001), and humanistic care (P < 0.001). While all models maintained reasonable safety and ethical standards, Claude 3.5 showed lower scores in trustworthiness and comprehensiveness, limiting its reliability in clinical decision support. Among the three evaluated LLMs, DeepSeek R1 excelled in comprehensiveness, readability, and humanistic care; GPT-4o performed well in scientific accuracy and safety, while Claude 3.5 showed relative strengths in logical coherence.
- New
- Research Article
- 10.30693/smj.2025.14.10.118
- Oct 30, 2025
- Korean Institute of Smart Media
- Jin Gam Park + 1 more
Sentiment analysis of financial text is a critical area for investment decisions and market forecasting, and recent advances in large language models (LLMs) have led to the development of domain-specific models, such as FinBERT, that achieve high performance. However, financial texts often contain a large amount of specialized terminology and numerical information, and if not handled with an appropriate structure, their performance may be inferior to general domain models. In this paper, we propose a method for efficiently learning from financial text by freezing the parameters of FinBERT and introducing a Mamba block-based adapter and a gating network. Our model achieves 91.03% accuracy and a 0.9017 F1 score on the Financial PhraseBank dataset, an improvement of 6.18 percentage points and 0.0721, respectively, over the previous best model (LSTM-XGBoost), and 66.06% accuracy with a 0.6394 F1 score on FiQA 2018 Task 1, improvements of 3.64 percentage points and 0.0393, respectively. These results outperform both fully fine-tuned FinBERT and other PEFT techniques.
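The frozen-backbone-plus-adapter design can be sketched in a few lines; the dimensions, the tanh adapter, and the sigmoid gate below are illustrative stand-ins for FinBERT and the paper's Mamba-block adapter:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

# Frozen backbone weights: fixed at initialization, never updated in training
W_frozen = rng.normal(size=(d, d))

# Trainable adapter (stand-in for the Mamba-block adapter)
W_adapter = rng.normal(size=(d, d)) * 0.01

# Gating network: a scalar gate deciding how much adapter signal to mix in
w_gate = rng.normal(size=d)

def forward(x):
    h = np.tanh(W_frozen @ x)                 # frozen representation
    a = np.tanh(W_adapter @ h)                # adapter output (trainable path)
    g = 1.0 / (1.0 + np.exp(-(w_gate @ h)))   # gate in (0, 1)
    return g * a + (1.0 - g) * h              # gated residual mix

y = forward(rng.normal(size=d))
```

Because gradients only flow into `W_adapter` and `w_gate`, the trainable parameter count stays tiny relative to full fine-tuning, which is the efficiency argument behind PEFT-style methods like this one.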
- New
- Research Article
- 10.36001/phmconf.2025.v17i1.4551
- Oct 26, 2025
- Annual Conference of the PHM Society
- Anandrao Todkar + 2 more
This paper presents a semantic framework to bridge the IT-OT integration gap in industrial environments. The proposed solution addresses fundamental challenges of PHM (prognostics and health management) by providing contextualized semantic information from the shop floor to enterprise IT systems. Built upon an OPC UA (Open Platform Communications Unified Architecture) aggregation server architecture, the framework leverages OPC UA Information Models and companion specifications as its foundation for semantic representation. By transforming these models into knowledge graphs stored in RDF format, the system enables sophisticated semantic information retrieval through SPARQL-based semantic queries that can traverse complex relationships between equipment, processes, and operational parameters. The framework further implements GraphQL to automatically generate a type schema derived from OPC UA types, creating a unified query interface that facilitates IT-like interaction with industrial data. This semantic approach significantly improves fault diagnostics, predictive maintenance, and anomaly detection by preserving contextual relationships that are often lost in traditional data integration methods. Furthermore, the GraphQL schema provides a structured foundation for generative AI applications to formulate contextually appropriate queries, extract relevant maintenance insights, and generate human-interpretable explanations of equipment health patterns, all while maintaining semantic fidelity across the IT-OT boundary. The vertical integration capability ensures that domain-specific models remain coherent across organizational levels such as line, area, floor, etc., enabling PHM practitioners to implement more effective condition-based maintenance strategies with improved visibility into causal factors affecting equipment reliability and performance.
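The core retrieval idea (equipment knowledge expressed as RDF triples, queried by pattern matching) can be sketched without any triple-store dependency; a real deployment would use rdflib or a SPARQL endpoint, and the equipment triples below are invented for illustration:

```python
# Toy triples mirroring an OPC UA information model exported to RDF;
# equivalent SPARQL: SELECT ?s ?o WHERE { ?s :hasParameter ?o }
triples = [
    ("Pump01", "hasType", "CentrifugalPump"),
    ("Pump01", "locatedIn", "LineA"),
    ("Pump01", "hasParameter", "VibrationRMS"),
    ("LineA", "partOf", "FloorNorth"),
]

def match(pattern):
    """SPARQL-style triple pattern matching: None acts as a variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which monitored parameters exist, and where does Pump01 sit?
params = match((None, "hasParameter", None))
pump_facts = match(("Pump01", None, None))
```

Chaining such patterns (pump → line → floor) is what lets a PHM query traverse the contextual relationships the paper argues are lost in flat data integration.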
- New
- Research Article
- 10.2166/wst.2025.156
- Oct 25, 2025
- Water Science & Technology
- Ramteja Sajja + 2 more
Large Language Models (LLMs) have shown strong performance across natural language processing tasks, yet their general-purpose embeddings often fall short in domains with specialized terminology and complex syntax, such as hydrology and environmental science. This study introduces HydroEmbed, a suite of open-source sentence embedding models fine-tuned for four QA formats: multiple-choice (MCQ), true/false (TF), fill-in-the-blank (FITB), and open-ended questions. Models were trained on the HydroLLM Benchmark, a domain-aligned dataset combining textbook and scientific article content. Fine-tuning strategies included MultipleNegativesRankingLoss, CosineSimilarityLoss, and TripletLoss, selected to match each task's semantic structure. Evaluation was conducted on a held-out set of 400 textbook-derived QA pairs, using top-k similarity-based context retrieval and GPT-4o-mini for answer generation. Results show that the fine-tuned models match or exceed performance of strong proprietary and open-source baselines, particularly in FITB and open-ended tasks, where domain alignment significantly improves semantic precision. The MCQ/TF model also achieved competitive accuracy. These findings highlight the value of task- and domain-specific embedding models for building robust retrieval-augmented generation (RAG) pipelines and intelligent QA systems in scientific domains. This work represents a foundational step toward HydroLLM, a domain-specialized language model ecosystem for environmental sciences.
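MultipleNegativesRankingLoss, the first of the fine-tuning objectives named above, treats every other positive in a batch as a negative: each query must rank its own paired passage highest. A numpy sketch of the loss on a batch of embeddings (the scale factor of 20 is a common default, assumed here):

```python
import numpy as np

def multiple_negatives_ranking_loss(q_emb, p_emb, scale=20.0):
    """In-batch softmax cross-entropy: row i's positive is column i."""
    # Cosine similarity matrix between all queries and all passages
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    p = p_emb / np.linalg.norm(p_emb, axis=1, keepdims=True)
    sims = scale * (q @ p.T)
    # Cross-entropy per row with the diagonal entry as the target
    log_z = np.log(np.exp(sims).sum(axis=1))
    return float(np.mean(log_z - np.diag(sims)))

# Perfectly aligned toy batch: query i equals passage i, so loss is near zero
batch = np.eye(4)
loss = multiple_negatives_ranking_loss(batch, batch)
```

This objective only needs (query, positive) pairs, with negatives coming free from the rest of the batch, which is why it suits QA-style retrieval data like the benchmark described here.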
- New
- Research Article
- 10.1016/j.jbi.2025.104930
- Oct 23, 2025
- Journal of biomedical informatics
- Jianfu Li + 17 more
Exploring multimodal large language models on transthoracic Echocardiogram (TTE) tasks for cardiovascular decision support.
- New
- Research Article
- 10.1007/s11548-025-03533-8
- Oct 23, 2025
- International journal of computer assisted radiology and surgery
- Heinz U Lemke
Model-guided medicine (MGM) represents a paradigm shift in clinical practice, emphasizing the integration of computational models to support diagnosis, therapy planning and individualized patient care. The general and/or specific domain models, on which recommendations, decisions or actions of these systems are based, should reflect in their model identity certificate (MIC) the level of model relevance, truthfulness and transparency. Methods and tools for building models and their corresponding templates for a MIC in the domains of radiology and surgery should be drawn from relevant elements of a model science, specifically from mathematical modelling methods (e.g. for model truthfulness) and modelling informatics tools (e.g. for model transparency). Other elements or MIC classes to consider may include ethics, human-AI model interaction and model control. A generic template of a MIC with classes, attributes and examples for the general domain of health care is being proposed as an initial attempt to gain experience with the complexity of the problems associated with enhancing trustworthiness in models. This template is intended to serve as a framework for an instance of a specific template for robot assisted intervention for hepatocellular cancer within the domain of interventional radiology (work-in-progress). Gaining trustworthiness in intelligent systems based on models and related AI tools is a challenging undertaking and raises many critical questions, specifically those related to ascertaining model relevance, truthfulness and transparency. The healthcare system, in particular the interventional medical disciplines, will have to be concerned about the availability of digital identity certificates to enable control of these systems and related artefacts, e.g. digital twins, avatars, diagnostic and interventional robots, or intelligent agents.
- New
- Research Article
- 10.1038/s41746-025-02003-4
- Oct 23, 2025
- NPJ Digital Medicine
- Aakash Tripathi + 4 more
Harmonized ONcologY Biomedical Embedding Encoder (HONeYBEE) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. Evaluated on 11,400+ patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models like Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
- New
- Research Article
- 10.54097/7am6vk38
- Oct 20, 2025
- Mathematical Modeling and Algorithm Application
- Xuguang Zhang + 1 more
Large language models (LLMs) have emerged as transformative technologies in financial services, demonstrating unprecedented capabilities in extracting structured knowledge from unstructured financial documents, generating analytical insights, and supporting strategic corporate planning decisions. This review paper examines the comprehensive applications of LLMs including GPT-4, Claude, PaLM, and domain-specific financial models in automating knowledge extraction from diverse sources including earnings calls, financial reports, regulatory filings, and market commentary. We analyze how transformer-based architectures (TA) leverage attention mechanisms and contextual embeddings to understand complex financial terminology, temporal relationships, and causal connections in financial narratives. The paper explores advanced techniques including prompt engineering, few-shot learning, retrieval-augmented generation (RAG), and fine-tuning strategies that adapt general-purpose LLMs to specialized financial tasks. We examine applications in sentiment analysis of financial texts, automatic summarization of lengthy reports, entity recognition for companies and products, relationship extraction between financial events, and question-answering systems for financial queries. The review investigates how LLMs generate analytical insights through scenario analysis, trend identification, risk assessment, and competitive intelligence synthesis. We analyze corporate planning support applications including strategic initiative identification, market opportunity analysis, resource allocation recommendations, and investment thesis generation. Furthermore, we discuss integration architectures combining LLMs with structured databases, time-series models, and visualization tools to create comprehensive decision support systems. 
The paper addresses critical challenges including hallucination mitigation, accuracy verification, regulatory compliance, data privacy concerns, and the need for human oversight in high-stakes financial decisions. We examine evaluation methodologies for financial LLM applications, including domain-specific benchmarks, expert assessment protocols, and real-world performance metrics. Through synthesis of current research and deployed systems, we identify limitations including computational costs, update frequency challenges, bias in training data, and difficulties in explaining model reasoning. The review concludes by outlining promising research directions including multimodal financial analysis, real-time information integration, federated learning for privacy-preserving collaboration, and neuro-symbolic approaches combining neural language understanding with formal financial reasoning.
- New
- Research Article
- 10.54097/7ysr5k17
- Oct 19, 2025
- Computer Life
- Shaochen Ren + 1 more
Large language models (LLMs) have emerged as transformative technologies in cybersecurity, offering unprecedented capabilities in threat detection, vulnerability analysis, and intelligent decision-making. This review examines the application of LLMs across critical cybersecurity domains, including cyber threat intelligence (CTI), threat hunting, vulnerability detection, malware analysis, and decision support systems. The integration of LLMs such as Generative Pre-trained Transformer 4 (GPT-4), Bidirectional Encoder Representations from Transformers (BERT), Large Language Model Meta AI (LLaMA), and domain-specific models like SecureFalcon has demonstrated remarkable potential in automating complex security tasks, enhancing analyst productivity, and enabling proactive defense mechanisms. However, the deployment of LLMs in cybersecurity contexts introduces unique challenges, including prompt injection vulnerabilities, data poisoning risks, hallucination concerns, and ethical considerations regarding adversarial use. This paper synthesizes recent research advances, evaluates current LLM architectures and their security applications, examines real-world implementation challenges, and identifies critical gaps requiring further investigation. Through comprehensive analysis of over sixty recent studies, we highlight how LLMs are reshaping cybersecurity practices while emphasizing the necessity for robust security frameworks, continuous model validation, and responsible deployment strategies to mitigate emerging risks associated with these powerful artificial intelligence (AI) systems.
- New
- Research Article
- 10.3390/electronics14204094
- Oct 18, 2025
- Electronics
- Yifan Liu + 2 more
Industrial Internet of Things (IIoT) systems are increasingly exposed to sophisticated and rapidly evolving cyber threats. In response, this work proposes a proactive threat detection framework that leverages pretrained transformer-based language models to identify emerging attack patterns within IIoT ecosystems. This work introduces a transformer-based framework that fine-tunes domain-specific pretrained models (SecBERT, SecRoBERTa, CyBERT), derives potential attack-path patterns from vulnerability–tactic mappings, and incorporates a retrieval-based fallback mechanism. The fallback not only improves robustness under uncertainty, but also provides a practical solution to the absence of labeled datasets linking ICS-specific MITRE ATT&CK tactics with vulnerabilities, thereby filling a key research gap. Experiments show that the fine-tuned models substantially outperform traditional machine learning baselines; SecBERT achieves the best balance while maintaining high inference efficiency. Overall, the framework advances vulnerability-driven threat modeling in IIoT and offers a foundation for the proactive identification of attack patterns.
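The retrieval-based fallback described above can be sketched as a confidence gate: when the classifier's softmax confidence falls below a threshold, the answer comes from the nearest labeled neighbor instead of the model head. The threshold value and toy vectors are assumptions for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_fallback(logits, query_emb, kb_embs, kb_labels, tau=0.7):
    """Return (label, source): the model's prediction, or a retrieval fallback."""
    probs = softmax(logits)
    if probs.max() >= tau:
        return int(probs.argmax()), "model"
    # Fallback: nearest neighbor in a small labeled knowledge base
    sims = kb_embs @ query_emb / (
        np.linalg.norm(kb_embs, axis=1) * np.linalg.norm(query_emb))
    return int(kb_labels[int(sims.argmax())]), "retrieval"

# Two labeled reference embeddings standing in for known attack patterns
kb_embs = np.array([[1.0, 0.0], [0.0, 1.0]])
kb_labels = [0, 1]

confident = predict_with_fallback(np.array([4.0, 0.0]), np.array([1.0, 0.1]),
                                  kb_embs, kb_labels)
uncertain = predict_with_fallback(np.array([0.1, 0.0]), np.array([0.1, 1.0]),
                                  kb_embs, kb_labels)
```

The gate is what makes the mechanism useful when labeled vulnerability-to-tactic data is scarce: uncertain inputs are deferred to whatever curated mappings exist rather than forced through an underconfident classifier.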
- Research Article
- 10.2196/56090
- Oct 15, 2025
- JMIRx Med
- Jorge Guerra Pires
Background: Artificial intelligence (AI) has evolved through various trends, with different subfields gaining prominence over time. Currently, conversational AI—particularly generative AI—is at the forefront. Conversational AI models are primarily focused on text-based tasks and are commonly deployed as chatbots. Recent advancements by OpenAI have enabled the integration of external, independently developed models, allowing chatbots to perform specialized, task-oriented functions beyond general language processing.
Objective: This study aims to develop a smart chatbot that integrates large language models from OpenAI with specialized domain-specific models, such as those used in medical image diagnostics. The system leverages transfer learning via Google’s Teachable Machine to construct image-based classifiers and incorporates a diabetes detection model developed in TensorFlow.js. A key innovation is the chatbot’s ability to extract relevant parameters from user input, trigger the appropriate diagnostic model, interpret the output, and deliver responses in natural language. The overarching goal is to demonstrate the potential of combining large language models with external models to build multimodal, task-oriented conversational agents.
Methods: Two image-based models were developed and integrated into the chatbot system. The first analyzes chest X-rays to detect viral and bacterial pneumonia. The second uses optical coherence tomography images to identify ocular conditions such as drusen, choroidal neovascularization, and diabetic macular edema. Both models were incorporated into the chatbot to enable image-based medical query handling. In addition, a text-based model was constructed to process physiological measurements for diabetes prediction using TensorFlow.js. The architecture is modular; new diagnostic models can be added without redesigning the chatbot, enabling straightforward functional expansion.
Results: The findings demonstrate effective integration between the chatbot and the diagnostic models, with only minor deviations from expected behavior. Additionally, a stub function was implemented within the chatbot to schedule medical appointments based on the severity of a patient’s condition, and it was specifically tested with the optical coherence tomography and X-ray models.
Conclusions: This study demonstrates the feasibility of developing advanced AI systems—including image-based diagnostic models and chatbot integration—by leveraging AI as a service. It also underscores the potential of AI to enhance user experiences in bioinformatics, paving the way for more intuitive and accessible interfaces in the field. Looking ahead, the modular nature of the chatbot allows for the integration of additional diagnostic models as the system evolves.
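The extract-parameters, trigger-model, narrate-result loop described above can be sketched without any chatbot API; the keyword router and stub diagnostic "models" below are illustrative, standing in for the LLM's parameter extraction and the real image/TensorFlow.js models:

```python
import re

# Stub diagnostic models standing in for the real classifiers
def pneumonia_model(image_ref):
    return f"chest X-ray {image_ref}: no pneumonia signs (stub)"

def diabetes_model(glucose):
    return f"glucose {glucose} mg/dL: {'elevated' if glucose > 125 else 'normal'} (stub)"

# Each route pairs a parameter-extraction pattern with a model trigger
ROUTES = [
    (re.compile(r"x-?ray\s+(\S+)", re.I), lambda m: pneumonia_model(m.group(1))),
    (re.compile(r"glucose\s+(\d+)", re.I), lambda m: diabetes_model(int(m.group(1)))),
]

def handle(user_text):
    """Route a free-text request to the first matching specialized model."""
    for pattern, action in ROUTES:
        m = pattern.search(user_text)
        if m:
            return action(m)
    return "No specialized model matched; answering with the base LLM (stub)."

reply = handle("Please check my fasting glucose 140 from this morning")
```

The modularity claim falls out of the structure: adding a new diagnostic model means appending one (pattern, trigger) pair to `ROUTES`, with no change to `handle` or the rest of the chatbot.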