Articles published on Visual Analytics
- Research Article
- 10.48175/ijarsct-29637
- Nov 3, 2025
- International Journal of Advanced Research in Science, Communication and Technology
- Lavanya Lokhande
In today’s business environment, data analytics plays a major role in understanding company performance and supporting decision-making. This review paper focuses on the study of Sales Performance Analysis using Power BI. It highlights how Business Intelligence (BI) tools such as Power BI help organizations transform raw data into meaningful insights. The review summarizes existing research, techniques, and applications that demonstrate how sales dashboards help track profit, sales trends, and regional growth. The paper concludes that Power BI is one of the most effective tools for visual analytics, enabling companies to make data-driven business decisions.
- Research Article
- 10.32628/cseit251117239
- Oct 31, 2025
- International Journal of Scientific Research in Computer Science, Engineering and Information Technology
- Rathod Vasant Santosh + 3 more
The Unified Framework for Deepfake Detection in Images, Videos, and Audio is a comprehensive system designed to identify manipulated multimedia content across multiple modalities. The project utilizes state-of-the-art deep learning techniques such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and spectrogram-based analysis to detect synthetic media generated by advanced AI tools. By integrating visual and auditory feature extraction pipelines, the framework ensures robust and reliable identification of fake images, video frame manipulations, and voice synthesis-based deepfakes. The proposed unified approach eliminates the need for separate detection systems by combining multimodal data analysis within a single architecture. Developed using Python, TensorFlow, Flask, and React.js, the framework supports real-time detection, visual analytics, and alert mechanisms for suspected deepfake content. Experimental results demonstrate high detection accuracy and adaptability against emerging deepfake generation techniques, confirming the system’s potential in digital forensics, social media verification, and cybersecurity applications. This work emphasizes the importance of developing unified, AI-driven tools to combat misinformation and safeguard the authenticity of digital content in modern communication networks.
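The abstract names TensorFlow and spectrogram-based audio analysis but does not give the architecture. A minimal sketch of what one branch of such a framework might look like, a small CNN over mel-spectrogram patches, is shown below; the input shape, layer sizes, and output are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of an audio branch for a multimodal deepfake detector:
# a CNN that maps a mel-spectrogram patch to a real/fake probability.
# All shapes and layer widths are assumed for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_audio_branch(input_shape=(128, 128, 1)):
    """CNN over spectrogram patches; outputs P(fake) per clip."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability the clip is fake
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_audio_branch()
    # Dummy spectrogram batch stands in for real extracted features.
    x = np.random.rand(4, 128, 128, 1).astype("float32")
    print(model.predict(x).ravel())  # four fake-probabilities
```

In a unified system of the kind described, a parallel visual branch would produce its own score, with the two fused before the final decision.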
- Research Article
- 10.33650/jeecom.v7i2.12624
- Oct 29, 2025
- Journal of Electrical Engineering and Computer (JEECOM)
- Faiz Firdausi + 4 more
The sales administration of sugarcane at CV Al Ameen, Jember, is still managed manually, resulting in risks such as inaccurate records, delays in shipment monitoring, and irregular fund disbursement. The lack of a centralized system hinders real-time transaction recapitulation and creates opportunities for fraud, particularly duplicate payment claims. These inefficiencies not only threaten financial accuracy but also undermine the reliability of business reporting. To address these issues, this study proposes a digital administration system that automates transaction recording, verifies payment claims, and improves distribution monitoring accuracy. The system integrates a Telegram Bot for payment automation and interactive visual analytics to monitor both distribution and financial transactions in real time. It is initially implemented as an offline local application to ensure accessibility, with the potential for future adaptation to web- or cloud-based platforms. Key features include automatic transaction recap, Telegram-based notifications, and periodic reporting to business stakeholders without the need for manual record-keeping. The Telegram Bot employs unique delivery identifiers to validate claims, ensuring that each payment request is processed only once, thereby reducing the risk of fraudulent activity. System communication is achieved through the Telegram API and webhook mechanism, enabling automated updates on new transactions, shipment status, and fund disbursement. Furthermore, the bot supports user queries for transaction summaries and payment reminders. The system development follows the Agile Model, which allows iterative design and continuous refinement in line with business partner requirements. The findings demonstrate that the integration of automation and analytics significantly enhances accuracy, efficiency, and transparency in sugarcane sales administration.
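The duplicate-claim safeguard described above reduces to an idempotency check keyed on the unique delivery identifier. A minimal sketch follows; the table name, fields, and amounts are illustrative assumptions, not CV Al Ameen's actual schema, and an in-memory database stands in for persistent storage.

```python
# Minimal sketch of idempotent payment-claim validation: the unique
# delivery identifier is a primary key, so a second claim for the same
# delivery violates the constraint and is rejected.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real database file
conn.execute("""
    CREATE TABLE payment_claims (
        delivery_id TEXT PRIMARY KEY,   -- unique delivery identifier
        amount      REAL NOT NULL,
        claimed_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def process_claim(delivery_id: str, amount: float) -> str:
    """Record a payment claim; reject duplicates for the same delivery."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO payment_claims (delivery_id, amount) VALUES (?, ?)",
                (delivery_id, amount),
            )
        return f"Claim accepted for delivery {delivery_id}."
    except sqlite3.IntegrityError:
        # PRIMARY KEY violation: this delivery was already claimed.
        return f"Duplicate claim rejected for delivery {delivery_id}."

print(process_claim("DLV-0001", 1_500_000.0))  # accepted
print(process_claim("DLV-0001", 1_500_000.0))  # rejected as duplicate
```

In a deployment like the one described, this check would sit inside the Telegram webhook handler so the bot can reply to each claim immediately.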
- Research Article
- 10.30845/ijll.v12p10
- Oct 29, 2025
- International Journal of Language & Linguistics
- Yuwen Zhao + 1 more
This study employs CiteSpace, a visual analytics tool, to conduct a knowledge mapping analysis of research on L2 oral fluency published in Chinese journals between 2010 and 2025. The aim is to delineate the core themes and research frontiers within this domain and to explore its future development trends. The findings reveal a resurgence of interest in L2 oral fluency research in China; however, a stable core group of authors has not yet formed, and scholarly collaboration networks remain relatively sparse and fragmented. Current research hotspots are primarily concentrated on the measurement of oral fluency and pedagogical interventions. Studies predominantly focus on English learners, suggesting a need for broadening the research perspective. Future research should prioritize strengthening academic collaboration, promoting the integration of technology, and fostering synergistic development between theory and practice.
- Research Article
- 10.55041/ijsrem53277
- Oct 29, 2025
- INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
- Greeshma B + 1 more
Abstract: This paper presents StockInsight, an end-to-end platform for interactive stock trend analysis and forecasting using Long Short-Term Memory (LSTM) deep learning. StockInsight integrates historical price data retrieval, preprocessing, model training, and web-based visualization. Historical stock data (Open, High, Low, Close, Volume) are obtained via the Yahoo Finance API for multiple equities over 10–15 years. Data cleaning and feature engineering (e.g., moving averages, sequence windows) are performed in Python using Pandas/NumPy. The forecasting core is a multivariate LSTM recurrent neural network with two LSTM layers followed by dense output layers, trained on 100-day rolling windows to predict next-day closing prices. We evaluate the model using standard regression metrics (RMSE, MAE) and find that it achieves substantial accuracy on test stocks. The web application (Flask-based) provides interactive charts of actual vs. predicted prices, forecast tables for the next 10 days, and trend visualizations. In experiments with tech-sector stocks, StockInsight’s LSTM forecasts closely track real price movements and improve on baseline ARIMA-like performance. Sample outputs include time-series plots of predicted vs. actual prices and tabulated multi-day forecasts. The system’s design, predictions, and user interface are discussed in detail, along with evaluation across varying market conditions, highlighting implications for retail traders. Keywords: Stock market forecasting, LSTM, time-series prediction, deep learning, data visualization, Yahoo Finance API, web analytics.
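The forecasting core (two stacked LSTM layers over 100-day rolling windows, predicting the next-day close) can be sketched as below. The ticker, layer widths, and epoch count are illustrative assumptions rather than the paper's configuration; the sketch is also univariate (Close only) for brevity, whereas the paper's model is multivariate, and it needs network access for the Yahoo Finance download.

```python
# Sketch of a two-layer LSTM next-day close forecaster on 100-day windows.
import numpy as np
import yfinance as yf
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import layers, models

WINDOW = 100  # days of history per training sample

# 1. Retrieve historical prices via the Yahoo Finance API.
close = yf.download("AAPL", period="10y")["Close"].values.reshape(-1, 1)

# 2. Scale, then slice into (window, 1) -> next-day-close pairs.
scaler = MinMaxScaler()
scaled = scaler.fit_transform(close)
X = np.array([scaled[i - WINDOW:i] for i in range(WINDOW, len(scaled))])
y = scaled[WINDOW:]

# 3. Two LSTM layers followed by dense output, as the abstract outlines.
model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),  # predicted next-day close (scaled)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.1)

# 4. Forecast tomorrow's close from the most recent 100 days.
pred = model.predict(scaled[-WINDOW:].reshape(1, WINDOW, 1))
print("Predicted next close:", scaler.inverse_transform(pred)[0, 0])
```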
- Research Article
- 10.1007/s12650-025-01088-z
- Oct 28, 2025
- Journal of Visualization
- Asal Jalilvand + 2 more
SeRViz: a visual analytics system for the analysis of sequential rules and its application to airport ground handling operations
- Research Article
- 10.2174/011570159x394299251006114128
- Oct 28, 2025
- Current neuropharmacology
- Yu-Han Wu + 5 more
Neural regeneration remains a highly debated topic, yet the field lacks a systematic bibliometric analysis. The objective of this study is to utilize bibliometric methods to identify research trends and significant topics within this domain, thereby providing a comprehensive overview of the current state of knowledge in the field. The Web of Science Core Collection (January 1, 2015 to October 3, 2024) served as the basis for analyzing 3,941 documents using CiteSpace and VOSviewer. The analysis focused on country/institution collaboration networks, keyword co-occurrence, and hotspot evolution. Between 2015 and 2024, the number of publications in this field demonstrated an upward trend, characterized by fluctuations. China and the United States were the leading contributors to global research output: China contributed 1,387 papers (35.19% of the total) with an h-index of 62, while the United States contributed 1,047 papers with an h-index of 74. In recent years, research has been concentrated on four major technological directions: neural electrical stimulation, biomaterial scaffolds, gene editing, and neural modulation. This shift in scholarly focus reflects the convergence of multiple catalytic factors, which have enabled the sophisticated simulation of neural systems, provided unprecedented analytical tools for neuroscience inquiry, and intensified societal demands for artificial intelligence applications and neurotechnology innovations, thereby stimulating accelerated research investment. Over the past decade, researchers worldwide have focused on neural regeneration. Bibliometric analyses have assessed scholarship, identified research hotspots, summarized core concepts, and provided valuable insights for future research in this field.
- Research Article
- 10.5194/nhess-25-4089-2025
- Oct 22, 2025
- Natural Hazards and Earth System Sciences
- Julius Schlumberger + 6 more
Abstract. With accelerating climate change, the impacts of natural hazards will compound and cascade, making them more complex to assess and manage. At the same time, tools that help decision-makers choose between different management options are limited. This study introduces a visual analytics dashboard prototype (https://www.pathways-analysis-dashboard.net/, last access: 18 October 2025) designed to support pathways analysis for multi-risk Disaster Risk Management (DRM). Developed through a systematic design approach, the dashboard employs interactive visualisations of pathways and their evaluation, including Decision Trees, Parallel Coordinates Plots, Stacked Bar Charts, Heatmaps, and Pathways Maps, to facilitate complex, multi-criteria decision-making under uncertainty. We demonstrate the utility of the dashboard through an evaluation with 54 participants at varying levels and disciplines of expertise. Depending on their expertise (non-experts, adaptation/DRM experts, pathways experts), users were able to interpret the options of the pathways, the performance of the pathways, and the timing of the decisions, and to perform a system analysis that accounts for interactions between the sectoral DRM pathways, with precision between 71% and 80%. Participants particularly valued the dashboard’s interactivity, which allowed for scenario exploration, provided additional information on demand, and offered clarifying data. Although the dashboard effectively supports the comparative analysis of pathway options, the study highlights the need for additional guidance and onboarding resources to improve accessibility, and for opportunities to generalise the prototype for application in different case studies. Tested as a standalone tool, the dashboard may have additional value in participatory analysis and modelling. This study underscores the value of visual analytics for the DRM and Decision Making Under Deep Uncertainty (DMDU) communities, with implications for broader applications across complex and uncertain decision-making scenarios.
- Research Article
- 10.3390/data10100167
- Oct 21, 2025
- Data
- Vladimir Zhurov + 2 more
Despite the critical importance of biomedical databases like MEDLINE, users are often hampered by search tools with stagnant designs that fail to support complex exploratory tasks. To address this limitation, we synthesized research from visual analytics and related fields to propose a new design framework for non-traditional search interfaces. This framework was built upon seven core principles: visualization, interaction, machine learning, ontology, triaging, progressive disclosure, and evolutionary design. For each principle, we detail its rationale and demonstrate how its integration can transcend the limitations of conventional search tools. We contend that by leveraging this framework, designers can create more powerful and effective search tools that empower users to navigate complex information landscapes.
- Research Article
- 10.5194/ica-adv-5-7-2025
- Oct 20, 2025
- Advances in Cartography and GIScience of the ICA
- Jinyi Cai + 14 more
Abstract. Cancer mapping is critical for understanding the spatial patterns of cancer burden, identifying disparities, and informing targeted interventions. However, the limited availability of accessible, local-level cancer data and user-friendly mapping tools hinders both professional users, who need finer-scale data to analyze community-level cancer burden, and the general public, who need clear and intuitive visualizations to better understand their cancer risk. Cancer Analytics and Maps for Small Areas (CAMSA) is a visual analytics platform designed to address the diverse needs of end users, including the general public, public health professionals, and researchers, by visualizing small-area cancer data. This paper presents the early stages of CAMSA’s development following an iterative user-centered design (UCD) process. Through needs assessment interviews, usability evaluation focus groups, and implementation capacity surveys, we identified five use cases: cancer burden exploration, health disparities identification, risk factor analysis, customized spatial and statistical analysis, and communication and collaboration. The alpha version of CAMSA was developed to fulfill core functional requirements to detect spatial patterns (e.g., clusters) of cancer burden across different stratification groups, including race, sex, and year. Usability evaluations, conducted through post-development focus groups, informed the extended functional requirements for the beta version to enhance its functionality. Findings from this iterative process underscore the importance of meeting the needs both of the general public (comprehensible knowledge) and of public health professionals and researchers (clarification of statistical uncertainty). This study showcases the effectiveness of user-centered design in ensuring the accessibility and practicality of CAMSA.
- Research Article
- 10.1177/14738716251372584
- Oct 18, 2025
- Information Visualization
- Sanne Van Der Linden + 3 more
Event sequence data consists of discrete events that happen over time. By grouping events based on common entities and ordering them chronologically, they form sequences. Events are registered in many domains, ranging from healthcare to logistics. Collections of these sequences typically represent high-level processes for users to discover, identify, and analyze. This discovery is challenging, given that sequences in real-world scenarios can grow long and have many events, many attribute dimensions of events, and/or various event categories. However, limited research focuses on analyzing long event sequences, which is the focus of this paper. We present LoLo, an interactive visual analytics method based on the analysis of multi-level structures in long event sequence collections. LoLo introduces a strategy to split the sequence collection into meaningful data-driven stages, where the definition of a stage facilitates interpretation and the injection of domain knowledge. The stages have different levels, which represent high-level processes taking into account high-level changes (global staging) combined with local sequence variations (local staging). We demonstrate the effectiveness of LoLo by comparing it to a baseline and by presenting two use cases on real-world data sets, one evaluated with two users and the other by us. These show that our staging method can capture the semantic content of stages and that users appreciate being able to switch between different levels of detail.
- Research Article
- 10.3390/rs17203442
- Oct 15, 2025
- Remote Sensing
- Roghayeh Heidari + 2 more
Understanding spatial variability is central to precision agriculture, yet terrain features are often overlooked in the remote sensing workflows that inform agronomic decision-making. Terrain is an important contributor to yield variability, alongside environmental conditions, soil properties, and management practices, but it is rarely integrated systematically into performance analysis and management zone (MZ) delineation. Addressing this gap requires approaches that incorporate terrain attributes and landform classifications into agricultural performance analysis, ideally through visual analytics that offer interpretable insights beyond the constraints of purely data-driven methods. This work therefore introduces a terrain-aware visual analytics approach that integrates landform classification with crop performance analysis to better support field-level decisions. We present an interactive focus+context visualization tool that integrates multiple data layers, including terrain features, a vegetation index-based performance metric, and management zones, into a unified, expressive view. The system leverages freely available remote sensing imagery and terrain data derived from Digital Elevation Models (DEMs) to evaluate crop performance and landform characteristics in support of agronomic analysis. The tool was applied to eleven agricultural fields across the Canadian Prairies under diverse environmental conditions. Fields were segmented into depressions, hilltops, and baseline areas, and crop performance was evaluated across these landform groups using the system’s interactive visualization and analytics. Depressions and hilltops consistently showed lower mean performance and higher variability (measured by the coefficient of variation) than baseline regions, which covered an average of 82% of each field. We also subdivided baseline areas using slope and the Sediment Transport Index (STI) to investigate soil erosion effects, but field-level patterns were inconsistent and no systematic differences emerged across sites. Expert evaluation confirmed the tool’s usability and its value for field-level decision support. Overall, the method enhances terrain-aware interpretation of remotely sensed data and contributes meaningfully to refining management zone delineation in precision agriculture.
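The per-landform comparison reported above (mean performance and coefficient of variation per class) reduces to a small grouped computation. A sketch follows; the column names, class proportions, and sample values are invented for illustration and are not the study's data.

```python
# Sketch of the landform-group statistics: mean performance and
# coefficient of variation (CV = std / mean) per landform class.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Each row: one pixel's landform class and vegetation-index performance value.
df = pd.DataFrame({
    "landform": rng.choice(["baseline", "depression", "hilltop"],
                           size=1000, p=[0.82, 0.10, 0.08]),
    "performance": rng.normal(0.7, 0.08, size=1000),
})

stats = df.groupby("landform")["performance"].agg(["mean", "std"])
stats["cv"] = stats["std"] / stats["mean"]  # higher CV = more variability
print(stats.round(3))
```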
- Research Article
- 10.3390/app152010927
- Oct 11, 2025
- Applied Sciences
- Luke Nichols + 2 more
State Departments of Transportation (DOTs) face challenges with traditional bridge inspections that are time-consuming, inconsistent, and paper-based. This study focused on an existing research gap regarding automated methods that streamline the bridge inspection process, prioritize maintenance effectively, and allocate resources efficiently. Thus, this paper introduces a digitalized bridge inspection framework by integrating Building Information Modeling (BIM) and Business Intelligence (BI) to enable near-real-time monitoring and digital documentation. This study adopts a Design Science Research (DSR) methodology, a recognized paradigm for developing and evaluating the innovative SmartBridge to address pressing bridge inspection problems. The method involved designing an Autodesk Revit-based plugin for data synchronization, element-specific comments, and interactive dashboards, demonstrated through an illustrative 3D bridge model. An illustrative example of the digitalized bridge inspection with the proposed framework is provided. The results show that SmartBridge streamlines data collection, reduces manual documentation, and enhances decision-making compared to conventional methods. This paper contributes to this body of knowledge by combining BIM and BI for digital visualization and predictive analytics in bridge inspections. The proposed framework has high potential for hybridizing digital technologies into bridge infrastructure engineering and management to assist transportation agencies in establishing a safer and efficient bridge inspection approach.
- Research Article
- 10.31127/tuje.1715271
- Oct 7, 2025
- Turkish Journal of Engineering
- Priyanka Deshmukh + 3 more
Recent advancements in conversational AI have improved task efficiency but often neglect the emotional and cognitive diversity of users. This research introduces a novel, user-centered framework for emotionally adaptive chatbots that integrates ML-based emotion recognition with personalized responses that are ethically filtered — meaning they are designed to respect user privacy, fairness, and transparency principles. The Berlin Emotional Speech Database (EmoDB) was used to train and evaluate three machine learning models using MFCC features. Among them, the XGBoost model achieved the highest classification accuracy of 77.6%, outperforming Random Forest (75.0%) and SVM (68.2%). To evaluate user experience, a dataset of 385 participants was generated using a 15-item Likert-scale questionnaire adapted from the UTAUT model and extended with trust and emotional alignment measures. Statistical tests, including a t-test (p = 0.711) between neurodiverse and non-neurodiverse users and an ANOVA (p = 0.337) across domains, confirmed the consistency and inclusivity of perceived satisfaction. Visual analytics, including correlation heatmaps and radar charts, revealed that users with predicted emotions such as happiness and neutral reported the highest satisfaction scores (mean = 4.49, SD = 0.29 and mean = 4.26, SD = 0.31, respectively). A seven-layered modular architecture was proposed, supporting real-time emotional adaptivity, personalization, and ethical compliance. The framework is integration-ready with NLP engines like GPT and Dialogflow, offering a scalable solution for affective AI deployment across healthcare, education, and public service domains.
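The MFCC-plus-XGBoost pipeline the abstract reports can be sketched as follows. Synthetic tones stand in for EmoDB recordings and the labels are random placeholders, so the printed accuracy says nothing about the paper's 77.6% result; feature count and model hyperparameters are also assumptions.

```python
# Sketch of an emotion-recognition pipeline: MFCC features per utterance,
# classified with XGBoost, in the spirit of the study's setup.
import numpy as np
import librosa
import xgboost as xgb

def mfcc_vector(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean of 13 MFCCs over time -> one fixed-length feature vector."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
# Placeholder "utterances": noisy tones at varying pitch; real work would
# load EmoDB wave files here instead.
X = np.array([mfcc_vector(np.sin(2 * np.pi * f * np.linspace(0, 1, 16000))
                          + 0.05 * rng.standard_normal(16000))
              for f in rng.uniform(100, 400, size=60)])
y = rng.integers(0, 3, size=60)  # placeholder labels, e.g. happy/neutral/sad

clf = xgb.XGBClassifier(n_estimators=50, max_depth=3, eval_metric="mlogloss")
clf.fit(X, y)
print("Training accuracy:", (clf.predict(X) == y).mean())
```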
- Research Article
- 10.55041/ijsrem52890
- Oct 6, 2025
- INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
- Samarpita Kule + 4 more
Abstract - Effective study planning is a crucial skill for students to manage their academic workload, yet conventional approaches often rely on static task lists and fixed timers that do not adapt to individual learning styles. As a result, students face challenges in prioritizing tasks, sustaining focus during study sessions, and evaluating their productivity, which may lead to poor time management, irregular study habits, and increased academic stress. To address these issues, this work proposes Planora, an intelligent, web-based study planner that integrates adaptive learning techniques with algorithm-driven task management. The system is developed using the Flask framework and MongoDB database, providing students with a personalized dashboard to manage study routines more effectively. A greedy priority-based scheduling algorithm is employed to organize tasks according to deadlines, importance, and estimated duration, ensuring that critical activities are completed on time. A key innovation of Planora is its adaptive Pomodoro timer, which adjusts focus intervals based on a student’s recent performance, reducing session length to avoid fatigue or extending it to enhance productivity. The system further computes a weekly focus score using weighted averages derived from session completion and distraction patterns. Additional features include a Generative AI-powered study assistant for academic support and visual analytics to track subject-wise efforts and productivity trends. By combining intelligent algorithms with behavioral insights, Planora offers a dynamic, data-driven study planning experience tailored to each student’s learning needs.
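A greedy priority-based scheduler over deadlines, importance, and estimated duration, as the abstract describes, can be sketched as below. The scoring formula and its weights are illustrative assumptions; Planora's actual heuristic is not given in the abstract.

```python
# Sketch of greedy priority scheduling: score each task from deadline
# urgency, importance, and estimated duration, then order by score.
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str
    deadline: date
    importance: int      # 1 (low) .. 5 (high)
    est_hours: float

def priority(task: Task, today: date) -> float:
    days_left = max((task.deadline - today).days, 1)
    # Assumed weighting: urgent, important, short tasks float to the top.
    return task.importance / days_left + 1.0 / task.est_hours

def schedule(tasks: list[Task], today: date) -> list[Task]:
    return sorted(tasks, key=lambda t: priority(t, today), reverse=True)

today = date(2025, 10, 6)
tasks = [
    Task("Math assignment", date(2025, 10, 8), importance=4, est_hours=3),
    Task("History reading", date(2025, 10, 20), importance=2, est_hours=2),
    Task("Lab report", date(2025, 10, 7), importance=5, est_hours=4),
]
for t in schedule(tasks, today):
    print(f"{t.name}: priority {priority(t, today):.2f}")
```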
- Research Article
- 10.1093/jas/skaf300.288
- Oct 4, 2025
- Journal of Animal Science
- Adeolu J Adekunle + 1 more
Abstract The beef cattle industry contributes over $86 billion annually to United States agricultural GDP. However, fluctuating market prices, volatile feed costs, and resource management inefficiencies threaten beef production sustainability. To address this issue, we developed the Feedlot Economic Decision Visualization Tool (FEDVT), a data-driven decision-support system that integrates real-time data analytics to optimize feedlot productivity and profitability. Data visualization and analysis were conducted using a custom-built tool integrating Microsoft Excel and Tableau. Excel was employed for data preprocessing, statistical summarization, and preliminary trend analysis, while Tableau facilitated interactive and dynamic visualization of key metrics. The tool integrates historical feedlot performance data from trusted sources to generate predictive economic insights. A moving average model was applied to analyze trends in feed costs, operational expenses, and market price fluctuations. Real-time cattle futures data were incorporated to support the tool’s predictive capabilities. FEDVT features an interactive dashboard for enhanced usability and is designed in collaboration with the Texas A&M Institute of Data Science. The dashboard allows users to input key operational parameters and simulate various economic scenarios through “what-if” analyses, empowering feedlot stakeholders to assess potential financial outcomes before making strategic decisions. Using historical datasets, the tool was tested against real-world economic conditions, and preliminary validation was conducted through industry stakeholder feedback. Results indicate that FEDVT can potentially improve economic decision-making by providing real-time data visualization and predictive analytics tailored to feedlot operations. The system enhances transparency in cost-benefit analyses and enables data-driven resource allocation strategies. To further refine the tool’s decision-making capabilities, future iterations will incorporate additional economic and environmental sustainability metrics and integrate artificial intelligence-powered predictions to enhance the quality of insights and recommendations.
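FEDVT performs its moving-average trend analysis in Excel and Tableau; an equivalent sketch in pandas is shown below. The feed-cost series and the 8-week window are invented for illustration, not the tool's data or configuration.

```python
# Sketch of moving-average trend analysis on a feed-cost series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
weeks = pd.date_range("2024-01-01", periods=52, freq="W")
# Synthetic weekly feed cost ($/ton) drifting around $250.
feed_cost = pd.Series(250 + np.cumsum(rng.normal(0, 3, size=52)), index=weeks)

trend = feed_cost.rolling(window=8).mean()          # 8-week moving average
signal = np.where(feed_cost > trend, "above trend", "at/below trend")

summary = pd.DataFrame({"cost": feed_cost.round(2),
                        "trend": trend.round(2),
                        "signal": signal})
print(summary.tail())  # recent weeks vs. the smoothed trend
```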
- Research Article
- 10.3389/fphys.2025.1661850
- Oct 3, 2025
- Frontiers in Physiology
- Huiyuan Huang + 5 more
Objective: Acne vulgaris is recognized as one of the top eight most disabling dermatological diseases globally. Acupuncture has emerged as a clinically valuable and widely practiced intervention for acne, with the World Health Organization endorsing it as an effective non-pharmacological treatment. While existing evidence demonstrates acupuncture’s ability to significantly improve acne symptoms, the research remains scattered and lacks comprehensive synthesis. This scoping review systematically maps the current clinical research on acupuncture for acne treatment to identify knowledge gaps and inform future research directions.
Methods: A systematic search was conducted across PubMed, EMBASE, the Cochrane Library, Web of Science, AMED, SinoMed, CNKI, WanFang, and VIP databases to identify relevant studies published between January 2014 and October 2024. Data extraction and synthesis were performed using descriptive statistics and visual analytics. The review followed the PRISMA-ScR guidelines and was prospectively registered with the OSF.
Results: This study included 114 eligible studies, comprising 48 randomized controlled trials, 63 non-randomized interventional studies, and 3 systematic reviews, with the vast majority conducted in China. After 2019, the publication output of acupuncture studies for acne treatment showed a declining trend, which was generally consistent with changes in research funding. Cochrane risk-of-bias assessment revealed that the overall methodological quality of RCTs was moderate, with a low proportion of high-quality studies. The main acupuncture interventions for acne included filiform needle acupuncture, pricking-cupping, fire needling, autohemotherapy, bloodletting therapy, and catgut embedding at acupoints, with Ashi point (local lesion area) being the most frequently selected acupoint. Among the 16 outcome measures evaluated, the effective rate was the most commonly used indicator. Overall, acupuncture demonstrated good safety in treating acne, although fire needling showed a significantly higher frequency of adverse events compared to other therapies.
Conclusion: As a globally prevalent complementary therapy, acupuncture has established a substantial research base for acne treatment; however, methodological limitations persist in existing studies. Future research should conduct multicenter, large-sample randomized controlled trials adhering to standardized reporting guidelines, develop comprehensive efficacy evaluation systems incorporating objective indicators, and investigate connections between clinical outcomes and mechanistic pathways. These efforts will elevate the evidence level for acupuncture in acne management.
Systematic Review Registration: https://doi.org/10.17605/OSF.IO/S2QT6
- Research Article
- 10.1007/s44311-025-00025-5
- Oct 3, 2025
- Process Science
- Marie-Christin Häge + 1 more
Abstract Organizations use conformance checking, a sub-discipline of process mining that compares process executions with predefined process models, to identify deviations in business processes. These capabilities make conformance checking highly relevant. To make its results accessible and tailored to specific tasks, effective visualizations are essential. Although the need for such visualizations has been identified and acknowledged by researchers in process mining, their development has so far been left to tool providers, leading to visualizations that are highly different and difficult to compare. A better understanding of these existing visualizations and their components would allow research to gain deeper insights and assess them more closely. To address this gap and establish a foundation for future research, this paper provides an overview of the existing breadth of characteristics of conformance checking visualizations in the form of a taxonomy. The taxonomy enables researchers to describe and assess existing visualizations in a structured manner, achieving a level of detailed understanding that was not possible before. It consists of six dimensions, which systematically highlight what information is displayed and how it is visualized across academic and commercial tools. We evaluate the taxonomy through expert interviews and demonstrate its applicability through four individual exemplary visualizations and an assessment of common visualizations across tools. Our research enhances the comprehension of visual analytics in process mining, particularly for conformance checking, and highlights promising future research avenues.
- Research Article
- 10.54963/ia.v1i2.1665
- Oct 2, 2025
- Intelligent Agriculture
- Jaydish John Kennedy + 1 more
This work presents an artificial intelligence system based on deep learning for diagnosing and detecting plant diseases. Using a CNN trained and optimized on the Plant Village dataset, the system accurately classifies diseases of major crops such as tomatoes, potatoes, and bell peppers. The method provides comprehensive diagnostic data, including taxonomy, the organisms responsible for the disease, nutritional-deficit mimics, and external symptoms, in addition to disease class predictions. Innovatively, the system incorporates the Rich Python library, which enables a graphical, colour-coded command-line interface, so users receive detailed, interactive feedback within the terminal itself. The programme was designed with ease of use in mind and is intended for researchers, educators, and farmers in real-world agricultural settings. By facilitating the detection and understanding of plant health issues in real time, it aids learning and practical decision-making. This study demonstrates how integrating AI with agricultural diagnostics can enhance interpretability, usefulness, and overall impact. Finally, it stresses how deep learning-based technology could revolutionize crop health monitoring and agricultural education.
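A minimal sketch of a Rich-based, colour-coded terminal report of the kind described is shown below. The diagnostic fields and the example prediction are illustrative assumptions, not output from the authors' CNN.

```python
# Sketch of a colour-coded diagnostic report in the terminal using Rich.
from rich.console import Console
from rich.table import Table

def report(crop: str, disease: str, confidence: float, pathogen: str,
           symptoms: str) -> None:
    """Render one classifier prediction as a colour-coded table."""
    table = Table(title=f"Diagnosis: {crop}")
    table.add_column("Field", style="cyan")
    table.add_column("Value")
    colour = "green" if confidence >= 0.9 else "yellow"
    table.add_row("Predicted disease", f"[bold]{disease}[/bold]")
    table.add_row("Confidence", f"[{colour}]{confidence:.1%}[/{colour}]")
    table.add_row("Causal organism", pathogen)
    table.add_row("External symptoms", symptoms)
    Console().print(table)

# Example prediction as it might come back from such a classifier.
report("Tomato", "Early blight", 0.93, "Alternaria solani",
       "Concentric dark lesions on lower leaves")
```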
- Research Article
- 10.1016/j.cag.2025.104356
- Oct 1, 2025
- Computers & Graphics
- Xingyu Liu + 7 more
CausalPrism: A visual analytics approach for subgroup-based causal heterogeneity exploration