Articles published on Interactive visual analysis
389 Search results
- Research Article
- 10.1371/journal.pone.0334779
- Jan 27, 2026
- PLOS One
- Michael Ginda + 5 more
Assessing and evaluating programmatic outcomes of graduate education programs helps stakeholders understand and respond to challenges that emerge over the course of a complex academic program. Given the increased complexity of program activities and outcomes, there is a need for semi-automatic, interactive visual analytics tools that transform data into actionable insights to inform decision-making. This paper documents the results of the evaluation planning and annual workflow setup to create dynamic assessment reports for the Complex Networks and Systems NSF Research Traineeship (CNS NRT) program at Indiana University. The CNS NRT evaluation workflow relied on institutional, survey, and publication data and freely available tools to guide decision-making and communicate the achievements of faculty and doctoral trainees who participated in the program between 2017 and 2024. The evaluation data to date show that participants judge the program as meeting its goals and that there is considerable evidence of research productivity, as indicated in the publication data. These clear gains in short-term outcomes show that the CNS NRT is on a pathway to achieving the medium- and longer-term goals of the project, which will be examined over a longer period.
- Research Article
- 10.1109/tvcg.2025.3634232
- Jan 1, 2026
- IEEE transactions on visualization and computer graphics
- Maurice Koch + 3 more
An essential task in analyzing collaborative design processes, such as those that are part of workshops in design studies, is identifying design outcomes and understanding how the collaboration between participants formed the results and led to decision making. However, findings are typically restricted to a consolidated textual form based on notes from interviews or observations. A challenge arises from integrating different sources of observations, leading to large amounts and heterogeneity of collected data. To address this challenge we propose a practical, modular, and adaptable framework of workshop setup, multimodal data acquisition, AI-based artifact extraction, and visual analysis. Our interactive visual analysis system, reCAPit, allows the flexible combination of different modalities, including video, audio, notes, or gaze, to analyze and communicate important workshop findings. A multimodal streamgraph displays activity and attention in the working area, temporally aligned topic cards summarize participants' discussions, and drill-down techniques allow inspecting raw data of included sources. As part of our research, we conducted six workshops across different themes ranging from social science research on urban planning to a design study on band-practice visualization. The latter two are examined in detail and described as case studies. Further, we present considerations for planning workshops and challenges that we derive from our own experience and the interviews we conducted with workshop experts. Our research extends existing methodology of collaborative design workshops by promoting data-rich acquisition of multimodal observations, combined AI-based extraction and interactive visual analysis, and transparent dissemination of results.
- Research Article
- 10.1109/tvcg.2025.3634794
- Jan 1, 2026
- IEEE transactions on visualization and computer graphics
- Simon Warchol + 7 more
Dimensionality reduction techniques help analysts make sense of complex, high-dimensional spatial datasets, such as multiplexed tissue imaging, satellite imagery, and astronomical observations, by projecting data attributes into a two-dimensional space. However, these techniques typically abstract away crucial spatial, positional, and morphological contexts, complicating interpretation and limiting insights. To address these limitations, we present SEAL, an interactive visual analytics system designed to bridge the gap between abstract 2D embeddings and their rich spatial imaging context. SEAL introduces a novel hybrid-embedding visualization that preserves image and morphological information while integrating critical high-dimensional feature data. By adapting set visualization methods, SEAL allows analysts to identify, visualize, and compare selections, defined manually or algorithmically, in both the embedding and original spatial views, facilitating a deeper understanding of the spatial arrangement and morphological characteristics of entities of interest. To elucidate differences between selected sets of items, SEAL employs a scalable surrogate model to calculate feature importance scores, identifying the most influential features governing the position of objects within embeddings. These importance scores are visually summarized across selections, with mathematical set operations enabling detailed comparative analyses. We demonstrate SEAL's effectiveness and versatility through three case studies: colorectal cancer tissue analysis with a pharmacologist, melanoma investigation with a cell biologist, and exploration of sky survey data with an astronomer. These studies underscore the importance of integrating image context into embedding spaces when interpreting complex imaging datasets.
Implemented as a standalone tool while also integrating seamlessly with computational notebooks, SEAL provides an interactive platform for spatially informed exploration of high-dimensional datasets, significantly enhancing interpretability and insight generation.
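The surrogate-model idea can be illustrated with a much simpler stand-in: score each high-dimensional feature by how strongly it tracks one embedding axis. This is a minimal sketch with illustrative names; a plain Pearson correlation replaces SEAL's actual scalable surrogate model.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length value sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def feature_importance(features, axis):
    """Rank features by |correlation| with one embedding axis.

    features: dict of name -> list of per-object feature values
    axis:     list of per-object embedding coordinates
    """
    scores = {name: abs(pearson(vals, axis)) for name, vals in features.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy data: feature "a" drives the axis; "b" is nearly unrelated.
feats = {"a": [1, 2, 3, 4, 5], "b": [2, 2, 1, 2, 2]}
axis = [1.1, 2.0, 2.9, 4.2, 5.0]
ranking = feature_importance(feats, axis)
```

A real surrogate (for example, a tree ensemble fit to predict embedding coordinates) captures nonlinear and interaction effects that a per-feature correlation misses.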
- Research Article
- 10.1093/database/baaf080
- Nov 26, 2025
- Database: The Journal of Biological Databases and Curation
- Clémentine Battistel + 11 more
Sequencing technologies continue to evolve, providing novel opportunities for disease surveillance and control. These advancements are crucial for diagnosing diseases and identifying genetically distinct variants with diverse host reservoir species and geographical distributions. Recent progress in sequencing-based analyses of marine mollusc diseases has been significant, yet challenges remain in data management due to a lack of dedicated tools and databases. To address this, we present MoPSeq-DB (Mollusc Pathogen Sequences DataBase), an open-source web application for managing curated genomic data on mollusc pathogens. Designed for accessibility to non-bioinformaticians, MoPSeq-DB features interactive data visualization and integrated analysis tools. Built with the Python Django framework, it automates common bioinformatics workflows, enabling rapid exploration of sequencing data. The application has minimal hardware requirements and is easy to install, host, and update. MoPSeq-DB facilitates systematic storage and flexible management of genomic data and metadata, improving data organization for mollusc pathogen research. Although developed with a focus on mollusc pathogens, the platform's adaptable design makes it a valuable resource for studying a wide range of pathogens. Database URL: https://mopseq-db.ifremer.fr
- Research Article
- 10.1109/tvcg.2025.3633897
- Nov 21, 2025
- IEEE transactions on visualization and computer graphics
- Kresimir Matkovic + 3 more
The interactive visual analysis of set-typed data, i.e., data with attributes that are of type set, is a rewarding area of research and applications. Valuable prior work has contributed solutions that enable the study of such data with individual set-typed dimensions. In this paper, we present CrossSet, a novel method for the joint study of two set-typed dimensions and their interplay. Based on a task analysis, we describe a new, multi-scale approach to the interactive visual exploration and analysis of such data. Two set-typed data dimensions are jointly visualized using a hierarchical matrix layout, enabling the analysis of the interactions between two set-typed attributes at several levels, in addition to the analysis of individual such dimensions. CrossSet is anchored at a compact, large-scale overview that is complemented by drill-down opportunities to study the relations between and within the set-typed dimensions, enabling an interactive visual multi-scale exploration and analysis of bivariate set-typed data. Such an interactive approach makes it possible to study single set-typed dimensions in detail, to gain an overview of the interaction and association between two such dimensions, to refine one of the dimensions to gain additional details at several levels, and to drill down to the specific interactions of individual set-elements from the set-typed dimensions. To demonstrate the effectiveness and efficiency of CrossSet, we have evaluated the new method in the context of several application scenarios.
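The cross-tabulation underlying such a matrix view can be sketched as follows, assuming each data item carries two set-valued attributes; the paper's hierarchical, multi-scale layout is built on top of counts like these (names below are illustrative):

```python
from collections import defaultdict

def cross_set_matrix(rows):
    """Count co-occurrences of elements from two set-typed attributes.

    rows: iterable of (set_a, set_b) pairs, one pair per data item.
    Returns a nested dict where matrix[a][b] is the number of items
    whose first attribute contains a and whose second contains b.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for set_a, set_b in rows:
        for a in set_a:
            for b in set_b:
                matrix[a][b] += 1
    return matrix

# Toy data: movies with a genre set and a language set.
rows = [
    ({"drama", "crime"}, {"en"}),
    ({"drama"}, {"en", "fr"}),
    ({"comedy"}, {"fr"}),
]
m = cross_set_matrix(rows)
```

Because set-typed attributes are not mutually exclusive, one item can increment several cells at once, which is exactly what distinguishes this from an ordinary categorical cross-tab.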
- Research Article
- 10.1182/blood-2025-2180
- Nov 3, 2025
- Blood
- Siddhartha Mantrala + 9 more
Myscape: Myeloma single cell atlas for precision exploration reveals stage-specific T cell dysregulation associated with progression and therapy
- Research Article
- 10.33650/jeecom.v7i2.12624
- Oct 29, 2025
- Journal of Electrical Engineering and Computer (JEECOM)
- Faiz Firdausi + 4 more
The sales administration of sugarcane at CV Al Ameen, Jember, is still managed manually, resulting in risks such as inaccurate records, delays in shipment monitoring, and irregular fund disbursement. The lack of a centralized system hinders real-time transaction recapitulation and creates opportunities for fraud, particularly duplicate payment claims. These inefficiencies not only threaten financial accuracy but also undermine the reliability of business reporting. To address these issues, this study proposes a digital administration system that automates transaction recording, verifies payment claims, and improves distribution monitoring accuracy. The system integrates a Telegram Bot for payment automation and interactive visual analytics to monitor both distribution and financial transactions in real time. It is initially implemented as an offline local application to ensure accessibility, with the potential for future adaptation to web- or cloud-based platforms. Key features include automatic transaction recap, Telegram-based notifications, and periodic reporting to business stakeholders without the need for manual record-keeping. The Telegram Bot employs unique delivery identifiers to validate claims, ensuring that each payment request is processed only once, thereby reducing the risk of fraudulent activity. System communication is achieved through the Telegram API and webhook mechanism, enabling automated updates on new transactions, shipment status, and fund disbursement. Furthermore, the bot supports user queries for transaction summaries and payment reminders. The system development follows the Agile Model, which allows iterative design and continuous refinement in line with business partner requirements. The findings demonstrate that the integration of automation and analytics significantly enhances accuracy, efficiency, and transparency in sugarcane sales administration.
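The duplicate-claim safeguard described above is essentially idempotent processing keyed on the unique delivery identifier. A minimal sketch, with illustrative names rather than the system's actual code:

```python
class PaymentClaimProcessor:
    """Process each payment claim at most once, keyed by delivery ID."""

    def __init__(self):
        self._settled = set()  # delivery IDs that have already been paid out

    def process_claim(self, delivery_id, amount):
        """Return True if the claim is accepted, False if it is a duplicate."""
        if delivery_id in self._settled:
            return False  # duplicate claim: reject, no second payout
        self._settled.add(delivery_id)
        # ... trigger the payout and send a Telegram notification here ...
        return True

proc = PaymentClaimProcessor()
first = proc.process_claim("DLV-0001", 150_000)
second = proc.process_claim("DLV-0001", 150_000)  # same delivery, rejected
```

In a deployed system the set of settled IDs would live in durable storage (a database table with a uniqueness constraint), so the guarantee survives restarts.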
- Research Article
- 10.1177/14738716251372584
- Oct 18, 2025
- Information Visualization
- Sanne Van Der Linden + 3 more
Event sequence data consists of discrete events that happen over time. By grouping events based on common entities and ordering them chronologically, they form sequences. Events are registered in different domains, ranging from healthcare to logistics. Collections of these sequences typically represent high-level processes for users to discover, identify, and analyze. This discovery is challenging, given that sequences in real-world scenarios can grow long, have many events, many attribute dimensions of events, and/or various event categories. However, limited research focuses on analyzing long event sequences, the focus of this paper. We present LoLo, an interactive visual analytics method based on the analysis of multi-level structures in long event sequence collections. LoLo introduces a strategy to split the sequence collection into meaningful data-driven stages, where the definition of a stage facilitates interpretation and injection of domain knowledge. The stages have different levels, which represent high-level processes taking into account high-level changes (global staging) combined with local sequence variations (local staging). We demonstrate the effectiveness of LoLo by comparing it to a baseline and present two use cases on real-world data sets, one evaluated with two users and the other by us, showing that our staging method can capture the semantic content in stages and that users appreciate being able to switch between different levels of detail.
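One simple form of global staging is to split every sequence at milestone events shared across the collection; a minimal sketch under that assumption (LoLo's data-driven staging is considerably richer):

```python
def global_stages(sequence, milestones):
    """Split one event sequence into stages at shared milestone events.

    sequence:   list of event labels in chronological order
    milestones: set of labels that open a new stage
    Returns a list of stages (lists of events); each milestone event
    starts the stage it opens.
    """
    stages, current = [], []
    for event in sequence:
        if event in milestones and current:
            stages.append(current)
            current = []
        current.append(event)
    if current:
        stages.append(current)
    return stages

# Toy healthcare-style sequence staged at "surgery" and "discharge".
events = ["admit", "test", "surgery", "test", "discharge", "followup"]
stages = global_stages(events, {"surgery", "discharge"})
```

Applying the same milestone set to every sequence aligns the collection globally, after which each stage can be analyzed for its local variations.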
- Research Article
- 10.3390/s25185617
- Sep 9, 2025
- Sensors (Basel, Switzerland)
- Flor De Luz Palomino Valdivia + 1 more
Multivariate time series analysis presents significant challenges due to its dynamism, heterogeneity, and scalability. Given this, preprocessing is considered a crucial step to ensure analytical quality. However, this phase falls solely on the user without system support, resulting in wasted time, subjective decision-making, and cognitive overload, and is prone to errors that affect the quality of the results. This situation reflects the lack of interactive visual analysis approaches that effectively integrate preprocessing with guidance mechanisms. The main objective of this work was to design and develop a guidance system for interactive visual analysis in multivariate time series preprocessing, allowing users to understand, evaluate, and adapt their decisions in this critical phase of the analytical workflow. To this end, we propose a new guide-based approach that incorporates recommendations, explainability, and interactive visualization. This approach is embodied in the GUIAVisWeb tool, which organizes a workflow through tasks, subtasks, and preprocessing algorithms; recommends appropriate components through consensus validation and predictive evaluation; and explains the justification for each recommendation through visual representations. The proposal was evaluated in two dimensions: (i) quality of the guidance, with an average score of 6.19 on the Likert scale (1–7), and (ii) explainability of the algorithm recommendations, with an average score of 5.56 on the Likert scale (1–6). In addition, a case study was developed with air quality data that demonstrated the functionality of the tool and its ability to support more informed, transparent, and effective preprocessing decisions.
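The recommend-by-predictive-evaluation idea can be sketched by masking known values and scoring candidate preprocessing methods on how well they recover them. This is a toy stand-in for GUIAVisWeb's consensus mechanism, with two illustrative imputation candidates:

```python
def impute_mean(series):
    """Replace None gaps with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def impute_interp(series):
    """Linearly interpolate None gaps between observed neighbours."""
    out, n = list(series), len(series)
    for i, v in enumerate(out):
        if v is None:
            lo = next(j for j in range(i - 1, -1, -1) if out[j] is not None)
            hi = next(j for j in range(i + 1, n) if series[j] is not None)
            t = (i - lo) / (hi - lo)
            out[i] = out[lo] + t * (series[hi] - out[lo])
    return out

def recommend_imputation(series, holdout_idx):
    """Score candidate imputers by error on artificially masked points."""
    masked = [None if i in holdout_idx else v for i, v in enumerate(series)]
    candidates = {"mean": impute_mean, "interp": impute_interp}
    scores = {}
    for name, fn in candidates.items():
        filled = fn(masked)
        scores[name] = sum(abs(filled[i] - series[i]) for i in holdout_idx)
    return min(scores, key=scores.get), scores

# On a smooth trend, interpolation should beat the global mean.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
best, scores = recommend_imputation(series, {2, 4})
```

The recovered-error score doubles as explainable evidence: the tool can show the user why one method was recommended over another for this particular series.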
- Research Article
- 10.2967/jnmt.125.270082
- Sep 9, 2025
- Journal of nuclear medicine technology
- Irena Maříková + 8 more
The aim of the study was to validate a new method for semiautomatic subtraction of [99mTc]Tc-sestamibi and [99mTc]NaTcO4 SPECT 3-dimensional datasets using principal component analysis (PCA) against the results of parathyroid surgery and to compare its performance with an interactive method for visual comparison of images. We also sought to identify factors that affect the accuracy of lesion detection using the two methods. Methods: Scintigraphic data from [99mTc]Tc-sestamibi and [99mTc]NaTcO4 SPECT were analyzed using semiautomatic subtraction of the 2 registered datasets based on PCA applied to the region of interest including the thyroid and an interactive method for visual comparison of the 2 image datasets. The findings of both methods were compared with those of surgery. Agreement with surgery was assessed with respect to the lesion quadrant, affected side of the neck, and the patient positivity regardless of location. Results: The results of parathyroid surgery and histology were available for 52 patients who underwent [99mTc]Tc-sestamibi/[99mTc]NaTcO4 SPECT. Semiautomatic image subtraction identified the correct lesion quadrant in 46 patients (88%), the correct side of the neck in 51 patients (98%), and true pathologic lesions regardless of location in 51 patients (98%). Visual interactive analysis identified the correct lesion quadrant in 44 patients (85%), correct side of the neck in 49 patients (94%), and true pathologic lesions regardless of location in 50 patients (96%). There was no significant difference between the results of the 2 methods (P > 0.05). The factors supporting lesion detection were accurate positioning of the patient on the camera table, which facilitated subsequent image registration of the neck, and, after excluding ectopic parathyroid glands, focusing detection on the thyroid ROI. 
Conclusion: The results of semiautomatic subtraction of [99mTc]Tc-sestamibi/[99mTc]NaTcO4 SPECT using PCA had good agreement with the findings from surgery as well as the visual interactive method, comparable to the high diagnostic accuracy of [99mTc]Tc-sestamibi/[123I]NaI subtraction scintigraphy and [18F]fluorocholine PET/CT reported in the literature. The main advantages of semiautomatic subtraction are minimum user interaction and automatic adjustment of the subtraction weight. Principal component images may serve as optimized input objects, potentially useful in machine-learning algorithms aimed at fully automated detection of hyperfunctioning parathyroid glands.
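For two registered datasets, PCA over the joint voxel-intensity distribution reduces to a 2x2 eigenproblem, and the minor component behaves like a subtraction whose weight is chosen automatically. A toy sketch of that core idea, not the authors' validated pipeline:

```python
from math import sqrt

def pca_subtraction(img_a, img_b):
    """Minor-component scores of two registered images' joint intensities.

    img_a, img_b: equal-length lists of voxel intensities (e.g. the two
    tracer volumes, flattened over the thyroid ROI). The second principal
    component downweights the shared signal and highlights voxels where
    the two images disagree.
    """
    n = len(img_a)
    ma, mb = sum(img_a) / n, sum(img_b) / n
    saa = sum((a - ma) ** 2 for a in img_a) / n
    sbb = sum((b - mb) ** 2 for b in img_b) / n
    sab = sum((a - ma) * (b - mb) for a, b in zip(img_a, img_b)) / n
    # Smaller eigenvalue of the 2x2 covariance matrix.
    lam = (saa + sbb - sqrt((saa - sbb) ** 2 + 4 * sab ** 2)) / 2
    # Its eigenvector (unnormalized) solves (C - lam*I) v = 0.
    vx, vy = sab, lam - saa
    norm = sqrt(vx * vx + vy * vy) or 1.0
    vx, vy = vx / norm, vy / norm
    return [(a - ma) * vx + (b - mb) * vy for a, b in zip(img_a, img_b)]

# Shared gradient in both images; one voxel is anomalous in image A only.
a = [0, 10, 20, 30, 40, 75, 60, 70, 80, 90]
b = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
scores = pca_subtraction(a, b)
# The anomalous voxel should receive the most extreme score.
hot_voxel = max(range(len(scores)), key=lambda i: abs(scores[i]))
```

Because the eigenvector is fitted to the data, no user-chosen subtraction weight is needed, which is the "minimum user interaction" advantage the conclusion highlights.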
- Research Article
- 10.1109/mcg.2025.3581560
- Sep 1, 2025
- IEEE computer graphics and applications
- Chi Zhang + 5 more
Circular genome visualizations are essential for exploring structural variants and gene regulation. However, existing tools often require complex scripting and manual configuration, making the process time-consuming, error-prone, and difficult to learn. To address these challenges, we introduce AuraGenome, a large language model (LLM)-powered framework for rapid, reusable, and scalable generation of multilayered circular genome visualizations. AuraGenome combines a semantic-driven multiagent workflow with an interactive visual analytics system. The workflow employs seven specialized LLM-driven agents, each assigned distinct roles, such as intent recognition, layout planning, and code generation, to transform raw genomic data into tailored visualizations. The system supports multiple coordinated views tailored for genomic data, offering ring, radial, and chord-based layouts to represent multilayered circular genome visualizations. In addition to enabling interactions and configuration reuse, the system supports real-time refinement and high-quality report export. We validate its effectiveness through two case studies and a comprehensive user study. AuraGenome is available at https://github.com/Darius18/AuraGenome.
- Research Article
- 10.1109/tvcg.2024.3433001
- Sep 1, 2025
- IEEE transactions on visualization and computer graphics
- Marina Evers + 3 more
Sensitivity analyses of simulation ensembles determine how simulation parameters influence the simulation's outcome. Commonly, one global numerical sensitivity value is computed per simulation parameter. However, when considering 3D spatial simulations, the analysis of localized sensitivities in different spatial regions is of importance in many applications. For analyzing the spatial variation of parameter sensitivity, one needs to compute a spatial sensitivity scalar field per simulation parameter. Given n simulation parameters, we obtain multi-field data consisting of n scalar fields when considering all simulation parameters. We propose an interactive visual analytics solution to analyze the multi-field sensitivity data. It supports the investigation of how strongly and in what way individual parameters influence the simulation outcome, in which spatial regions this is happening, and what the interplay of the simulation parameters is. Its central component is an overview visualization of all sensitivity fields that avoids 3D occlusions by linearizing the data using an adapted scheme of data-driven space-filling curves. The spatial sensitivity values are visualized in a combination of a Horizon Graph and a line chart. We validate our approach by applying it to synthetic and real-world ensemble data.
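The linearization step can be illustrated with a fixed Morton (Z-order) curve, a simpler stand-in for the adapted data-driven space-filling curves used in the paper; nearby voxels tend to stay close together in the resulting 1D order:

```python
def part1by2(x):
    """Spread the bits of a 10-bit integer two apart (for 3D interleaving)."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0xFF0000FF
    x = (x | (x << 8)) & 0x0300F00F
    x = (x | (x << 4)) & 0x030C30C3
    x = (x | (x << 2)) & 0x09249249
    return x

def morton3(x, y, z):
    """Morton (Z-order) key: interleave the bits of x, y, and z."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

def linearize(field):
    """Order a 3D scalar field (dict keyed by (x, y, z)) along the Z-curve."""
    keys = sorted(field, key=lambda p: morton3(*p))
    return [field[p] for p in keys]

# Toy 2x2x2 field: value = x + y + z.
field = {(x, y, z): x + y + z
         for x in range(2) for y in range(2) for z in range(2)}
curve = linearize(field)
```

With n sensitivity fields, linearizing each one along the same curve yields n aligned 1D signals, which is what makes the Horizon Graph plus line chart overview possible without 3D occlusion.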
- Research Article
- 10.3390/buildings15162929
- Aug 18, 2025
- Buildings
- Hafiz Muhammad Shakeel + 4 more
Conventional methods of studying houses’ Energy Performance Certificates (EPCs) typically fail to investigate the impact of interrelated contextual elements, fixating instead exclusively on the specific attributes of individual houses. This study presents a new method that combines network graph analytics (NGA) with interactive visual analytics to investigate hidden linkages at the individual house level. Our proposed platform collects and analyses data related to housing attributes, creates a network based on the links between these attributes, and employs sophisticated graph algorithms to provide visual representations. Users have the ability to dynamically choose postcodes, metrics, and attributes, which, in turn, generate layouts of networks that provide valuable insights. The visualisation utilises colour gradients and node metrics to improve the comprehensibility of energy performance areas. The platform enables homeowners and stakeholders to comprehend the interrelationships between aspects such as neighbouring housing features and house infrastructure. The results prove the efficacy of the strategy by giving a collection of case studies that encompass various EPC ratings ranging from A to G. Each case study demonstrates the evolution of network architectures and visual assessments, showcasing the energy performance linked to certain EPC ratings. The platform offers a user-friendly interface for stakeholders to investigate and understand attribute relationships.
- Research Article
- 10.1002/ente.202500805
- Aug 5, 2025
- Energy Technology
- Zhaoyi Liu + 6 more
This paper proposes a shale gas estimated ultimate recovery (EUR) prediction framework based on multialgorithm integration optimization and interactive effect visualization analysis. By integrating XGBoost, CatBoost, and LightGBM algorithms and introducing Bayesian hyperparameter optimization technology, an EUR prediction model for the Luzhou block is constructed. Compared with traditional single-model methods, this framework achieves the following for the first time: 1) a multialgorithm collaborative parameter tuning mechanism based on Bayesian optimization and 2) visualization analysis of the interaction effects of geological and engineering parameters based on interpretability techniques. The results show that, under conditions of limited well data, the LightGBM algorithm optimized by Bayesian methods outperforms other algorithms in predicting EUR, with an error rate of only 13.7%. Through marginal effect analysis of single-factor and two-factor interactions, the study investigated the influence of feature values on EUR contributions and defined the optimal range of feature parameters for achieving higher EUR within the block. This research provides a new paradigm for integrated optimization of shale gas geological engineering.
- Research Article
- 10.1101/2025.07.19.665696
- Jul 28, 2025
- bioRxiv
- Simon Warchol + 7 more
Dimensionality reduction techniques help analysts make sense of complex, high-dimensional spatial datasets, such as multiplexed tissue imaging, satellite imagery, and astronomical observations, by projecting data attributes into a two-dimensional space. However, these techniques typically abstract away crucial spatial, positional, and morphological contexts, complicating interpretation and limiting insights. To address these limitations, we present SEAL, an interactive visual analytics system designed to bridge the gap between abstract 2D embeddings and their rich spatial imaging context. SEAL introduces a novel hybrid-embedding visualization that preserves image and morphological information while integrating critical high-dimensional feature data. By adapting set visualization methods, SEAL allows analysts to identify, visualize, and compare selections—defined manually or algorithmically—in both the embedding and original spatial views, facilitating a deeper understanding of the spatial arrangement and morphological characteristics of entities of interest. To elucidate differences between selected sets of items, SEAL employs a scalable surrogate model to calculate feature importance scores, identifying the most influential features governing the position of objects within embeddings. These importance scores are visually summarized across selections, with mathematical set operations enabling detailed comparative analyses. We demonstrate SEAL’s effectiveness and versatility through three case studies: colorectal cancer tissue analysis with a pharmacologist, melanoma investigation with a cell biologist, and exploration of sky survey data with an astronomer. These studies underscore the importance of integrating image context into embedding spaces when interpreting complex imaging datasets. 
Implemented as a standalone tool while also integrating seamlessly with computational notebooks, SEAL provides an interactive platform for spatially informed exploration of high-dimensional datasets, significantly enhancing interpretability and insight generation.
- Research Article
- 10.46630/phm.17.2025.45
- Jul 18, 2025
- PHILOLOGIA MEDIANA
- Georgina Frei
The integration of literary texts into foreign language teaching offers significant pedagogical benefits but also poses considerable challenges, particularly in adapting complex content to learners’ language proficiency levels. The emergence of LLMs presents innovative possibilities for addressing these challenges through AI-driven text processing and instructional support. This paper explores the potential of AI tools in the didacticization of literary works by conducting a systematic literature review of AI-based approaches to working with literary texts. The study examines various types of prompts and their anticipated educational outcomes, synthesizing findings to identify best practices in applying AI tools within educational contexts. Additionally, methods suggested in the literature without specific prompts were reformulated into actionable, research-based prompts for practical use. The findings highlight how AI technologies can enhance educational experiences through functionalities such as text summarization, simplification, interactive scenario creation, content visualization, and literary analysis, thereby enabling differentiated learning tailored to diverse student needs. Despite these benefits, the study underscores critical challenges, including the risks of over-reliance on AI-generated content, potential inaccuracies, and cultural insensitivity. Ethical considerations, such as the protection of intellectual property and the need to maintain academic integrity, are also discussed. The paper concludes by advocating for further research on the practical application of prompts in classroom settings, emphasizing the importance of studies to assess their impact on student engagement, motivation, and critical thinking.
- Research Article
- 10.47772/ijriss.2025.906000174
- Jul 5, 2025
- International Journal of Research and Innovation in Social Science
- Muhamad Dody Firmansyah + 1 more
In the digital era, news portals have become essential platforms for delivering timely news and information across diverse topics such as politics, business, technology, and public affairs. These platforms, which may be operated by independent media, private corporations, or government institutions, generate extensive user engagement data, including metrics such as page views, reading duration, and user interactions. Despite the abundance of this data, many editorial teams lack the analytical tools necessary to extract actionable insights to guide content strategy and audience engagement. This study explores the application of Tableau, a data visualization tool, to interpret and present user interaction data from the news portal of the Indonesian National Police. As a public sector platform, the portal plays a strategic role in government communication, transparency, and citizen engagement. Through the development of interactive dashboards and visual analytics, this research aims to support data-driven editorial decision-making by providing intuitive insights into content performance and user behaviour. Data extracted from SQL-based systems was processed using Tableau to create visualizations such as bar charts, pie charts, and trend lines depicting content uploads by category, type, and region over time. These elements were integrated into an interactive dashboard that offers editorial teams intuitive insights into content trends. To evaluate system usability, the study employed the System Usability Scale (SUS), a ten-item questionnaire used to assess the dashboard’s effectiveness and ease of use. The final dashboard provided actionable recommendations to enhance content strategy and resource allocation across different work units and regions. This research highlights how visual analytics can significantly improve public sector communication and editorial planning through user-centric, data-driven approaches.
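The System Usability Scale score mentioned above is computed with a fixed, well-known formula; a small sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    responses: list of ten integers, item 1 first. Odd-numbered items are
    positively worded (contribution = response - 1); even-numbered items
    are negatively worded (contribution = 5 - response). The sum of the
    ten contributions is scaled by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Best possible answers: 5 on positive items, 1 on negative items.
best = sus_score([5, 1] * 5)
neutral = sus_score([3] * 10)
```

Scores are usually averaged over respondents; a common rule of thumb treats results above roughly 68 as above-average usability.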
- Research Article
- 10.1016/j.visinf.2025.100260
- Jul 1, 2025
- Visual Informatics
- Zichen Cheng + 4 more
Interactive simulation and visual analysis of social media event dynamics with LLM-based multi-agent modeling
- Research Article
- 10.1109/tvcg.2024.3394745
- Jul 1, 2025
- IEEE transactions on visualization and computer graphics
- Longfei Chen + 9 more
The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.
- Research Article
- 10.1177/14738716251342474
- Jun 28, 2025
- Information Visualization
- Haijun Yu + 1 more
Hyperspectral images (HSIs) have become increasingly prominent as they can maintain the subtle spectral differences of the imaged objects. Designing approaches and tools for analyzing HSIs presents a unique set of challenges due to their high-dimensional characteristics. Given the problems in current visual analysis methods for HSIs, such as insufficient guidance and difficulty in achieving an accurate selection of specific spectral pixels, a universal interactive visual analysis approach is proposed in this article, which enables observers to visually interpret the rich information contained in HSIs with guidance and pertinence through a graphical interface. Selecting a region of interest enables interactive screening along the spatial dimension of HSIs. Three information indicators are used to guide observers to select bands effectively. The clustering calculation and its scatter plot play an important guiding role in the selection and interpretation of feature classes for observers. For the precise selection of specific spectral pixels, a parallel coordinate method with reordering calculation of spectral bands is proposed to make it easier to distinguish spectral data curves and improve the clarity of target class expression. Finally, the usability and effectiveness of the proposed approach are analyzed through experiments.
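The band-reordering idea can be sketched as a greedy ordering that places spectrally similar bands on adjacent axes, which tends to reduce line crossings between neighbouring parallel-coordinate axes. This is a stand-in for the paper's actual reordering calculation, with illustrative band names:

```python
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length value sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def reorder_bands(bands):
    """Greedy axis order: each next axis is the most correlated remaining band.

    bands: dict of band name -> list of per-pixel reflectance values.
    """
    remaining = dict(bands)
    current = next(iter(remaining))  # start from the first band
    order = [current]
    cur_vals = remaining.pop(current)
    while remaining:
        nxt = max(remaining, key=lambda b: correlation(cur_vals, remaining[b]))
        order.append(nxt)
        cur_vals = remaining.pop(nxt)
    return order

# Toy bands: b460 resembles b450, b900 is anti-correlated with both.
bands = {
    "b450": [1.0, 2.0, 3.0, 4.0],
    "b900": [4.0, 3.0, 2.0, 1.0],
    "b460": [1.1, 2.0, 3.2, 3.9],
}
order = reorder_bands(bands)
```

When similar bands sit side by side, the polylines between adjacent axes stay nearly parallel, so distinct spectral classes separate more cleanly in the plot.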