
Related Topics

  • Convolutional Neural Network Features
  • Deep Features
  • Multi-level Features
  • Convolutional Features

Articles published on Feature Fusion

20720 Search results
  • New
  • Research Article
  • 10.1016/j.saa.2025.126955
Advancing quinoa (Chenopodium quinoa Willd.) quality assessment using hyperspectral imaging.
  • Feb 5, 2026
  • Spectrochimica acta. Part A, Molecular and biomolecular spectroscopy
  • Xiaojiang Wang + 3 more

  • New
  • Research Article
  • 10.1142/s0218001426590135
LPGANet: A High-Precision Lightweight Power Grid Anomaly Detection System for Complex Environment
  • Feb 4, 2026
  • International Journal of Pattern Recognition and Artificial Intelligence
  • Junwei Li + 5 more

A fault in a power grid transmission line, or foreign matter caught in it, poses a potential threat to the power system, and efficient anomaly detection is key to maintaining the stability of modern transmission systems. The growing demand for edge computing equipment makes lightweight, efficient power grid anomaly detection methods an emerging trend. To meet the practical demands of power grid anomaly detection, this paper introduces LPGANet, a lightweight model designed to achieve high accuracy while improving detection efficiency. The model integrates dynamic snake convolution (DSConv) and spatial-channel reconstruction convolution (SCConv) to strengthen multi-scale feature extraction and fusion while cutting computational cost. In addition, an EMA method is adopted to enhance focus on the foreground and reduce the impact of the background. We release a new dataset containing 6,200 images of typical power grid anomalies such as broken strands, scattered strands, and other floating suspensions. Experimental results on this dataset demonstrate that LPGANet achieves the best accuracy, highest efficiency, and best overall performance among the compared object detection methods. The system's effectiveness under computational resource limitations is also verified by deployment on Jetson AGX Orin edge devices.

  • New
  • Research Article
  • 10.1109/tmi.2026.3660361
From contrast-driven segmentation to central lumbar spinal stenosis grading: a comprehensive multi-view spinal MRI image analysis.
  • Feb 3, 2026
  • IEEE transactions on medical imaging
  • Zhengchao Zhou + 5 more

Central lumbar spinal stenosis, a prevalent degenerative spinal disorder, severely impacts the quality of life of those affected. Axial and sagittal MRI images offer diverse information on tissue structure and lesions, which is crucial for accurate diagnosis. However, MRI-based diagnostic approaches still suffer from poor lesion localization, insufficient cross-view alignment, underutilization of multi-view MRI information, and limited generalization across patient variability. To address these problems, we propose ELSG-MF, an encompassing lumbar central spinal stenosis grading model based on multi-view MRI image fusion. ELSG-MF consists of three stages. The first stage extracts robust pseudo-labels through a contrast-driven consistency reinforcement technique to guide Med-SAM in localizing and segmenting spinal tissue components. The second stage develops a Sagittal-Axial Pairing (SAP) algorithm that integrates the spatial anatomical relationship between the vertebral body and the intervertebral disc, enabling correlation pairing between sagittal and axial images. The third stage introduces a Multi-view Adaptive Fusion (M²AF) module that adaptively and dynamically fuses anatomical features across views. M²AF enhances the extraction of contextual complementary information and significantly improves the model's capacity to detect subtle variations in the degree of stenosis. A series of studies shows that our model achieves an overall accuracy of 0.8631, an AUC of 0.96, and an F1-score of 0.8614. These results indicate that our model substantially outperforms mainstream approaches, attaining superior segmentation and grading accuracy while exhibiting robust generalization and clinical application potential.
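The abstract above does not spell out how the SAP algorithm pairs views; as a generic, heavily simplified illustration of position-based cross-view pairing (not the paper's method — all coordinates below are hypothetical), each axial slice can be assigned to the nearest disc level along the spine axis:

```python
# Generic cross-view pairing sketch: assign each axial slice to the
# closest disc level along the spine axis. This illustrates the idea of
# position-based pairing only; it is not the paper's SAP algorithm.

def pair_by_position(axial_positions, disc_positions):
    """Map each axial slice position to the index of the nearest disc."""
    pairs = []
    for z in axial_positions:
        idx = min(range(len(disc_positions)),
                  key=lambda i: abs(disc_positions[i] - z))
        pairs.append(idx)
    return pairs

# Hypothetical disc centers (e.g. L3/L4, L4/L5, L5/S1) in mm offsets.
discs = [30.0, 60.0, 90.0]
axial_slices = [28.0, 33.0, 58.0, 92.0]
assignments = pair_by_position(axial_slices, discs)  # [0, 0, 1, 2]
```

In practice such pairing would be done on registered anatomical landmarks rather than raw slice offsets, but the nearest-neighbour assignment is the core of any position-driven pairing step.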

  • New
  • Research Article
  • 10.1177/10668969251410957
Androgen Receptor Positive EWSR1::FEV-Rearranged Prostatic Ewing Sarcoma Mimicking High-Grade Neuroendocrine Carcinoma.
  • Feb 3, 2026
  • International journal of surgical pathology
  • Tanisha Martheswaran + 3 more

Ewing sarcoma (ES) is a rare aggressive neoplasm that is the second most common primary bone tumor of childhood and adolescence, with less frequent extraskeletal presentations. ES with EWSR1::FEV translocation is extremely rare and is characterized by extraskeletal location, varying morphology and immunophenotype, and an aggressive clinical course. We present a prostatic ES confirmed by EWSR1::FEV fusion, detailing its clinical presentation, histopathologic and immunophenotypic features, molecular profile, and management. A man in his mid-50s presented with urinary frequency and difficulty voiding. Imaging revealed a 4.4 cm prostatic mass with bladder invasion and right iliac lymphadenopathy. Serum PSA was within normal limits. Biopsy demonstrated a poorly differentiated epithelioid neoplasm with neuroendocrine features. Immunohistochemistry showed strong expression of keratins AE1/3 and CAM5.2, chromogranin, synaptophysin, NKX2.2, and CD99 (weak), while PSA was negative. NKX3.1 was focally positive in rare tumor cells and Ki67 was approximately 35%. Perineural invasion and intraductal spread were noted. The tumor was initially interpreted as poorly differentiated carcinoma with neuroendocrine features. The patient underwent radical prostatectomy, revealing a 5.5 cm tumor with perineural and lymphovascular invasion, and nodal metastasis. Next-generation sequencing confirmed an EWSR1::FEV fusion, establishing the diagnosis of ES. Immunostain for androgen receptor was strongly and diffusely positive in the primary tumor and in the nodal metastasis, which together with focal staining for NKX3.1 were suggestive of primary prostatic origin and invited consideration of androgen deprivation therapy. This report highlights a rare prostatic Ewing-family sarcoma harboring an EWSR1::FEV fusion and immunophenotypic features that mimic a neuroendocrine carcinoma.

  • New
  • Research Article
  • 10.1088/2057-1976/ae4108
CCE-Net: A Lightweight Context Contrast Enhancement Network and Its Application in Medical Image Segmentation.
  • Feb 3, 2026
  • Biomedical physics & engineering express
  • Xiaojing Hou + 1 more

Efficient and accurate segmentation models play a vital role in medical image segmentation; however, the high computational cost of traditional models limits clinical deployment. Building on pyramid vision transformers and convolutional neural networks, this paper proposes a lightweight Context Contrast Enhancement Network (CCE-Net) that ensures efficient inference and achieves accurate segmentation through a contextual feature synergy mechanism and a feature contrast enhancement strategy. The Local Context Fusion Enhancement module obtains more specific local detail information through cross-layer context fusion and bridges the semantic gap between the encoder and decoder. The Deep Feature Multi-scale Extraction module fully extracts comprehensive information from the deepest features in the model's bottleneck layer and provides more accurate global contextual features for the decoder. The Detail Contrast Enhancement Decoder module effectively addresses the inherent problems of missing image details and blurred edges through adaptive dual-branch feature fusion and frequency-domain contrast enhancement operations. Experiments show that CCE-Net requires only 5.40M parameters and 0.80G FLOPs (37%-62% fewer parameters than mainstream models) while achieving average Dice coefficients of 82.25% and 91.88% on the Synapse and ACDC datasets, respectively, promoting the transition of lightweight medical AI models from laboratory research to clinical practice.

  • New
  • Research Article
  • 10.3390/app16031551
An Application Study on Digital Image Classification and Recognition of Yunnan Jiama Based on a YOLO-GAM Deep Learning Framework
  • Feb 3, 2026
  • Applied Sciences
  • Nan Ji + 2 more

Yunnan Jiama (paper horse prints), a representative form of intangible cultural heritage in southwest China, is characterized by subtle inter-class differences, complex woodblock textures, and heterogeneous preservation conditions, which collectively pose significant challenges for digital preservation and automatic image classification. To address these challenges and improve the computational analysis of Jiama images, this study proposes an enhanced object detection framework based on YOLOv8 integrated with a Global Attention Mechanism (GAM), referred to as YOLOv8-GAM. In the proposed framework, the GAM module is embedded into the high-level semantic feature extraction and multi-scale feature fusion stages of YOLOv8, thereby strengthening global channel–spatial interactions and improving the representation of discriminative cultural visual features. In addition, image augmentation strategies, including brightness adjustment, salt-and-pepper noise, and Gaussian noise, are employed to simulate real-world image acquisition and degradation conditions, which enhances the robustness of the model. Experiments conducted on a manually annotated Yunnan Jiama image dataset demonstrate that the proposed model achieves a mean average precision (mAP) of 96.5% at an IoU threshold of 0.5 and 82.13% under the mAP@0.5:0.95 metric, with an F1-score of 94.0%, outperforming the baseline YOLOv8 model. These results indicate that incorporating global attention mechanisms into object detection networks can effectively enhance fine-grained classification performance for traditional folk print images, thereby providing a practical and scalable technical solution for the digital preservation and computational analysis of intangible cultural heritage.
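The augmentation strategies named in the abstract above (brightness adjustment, salt-and-pepper noise, Gaussian noise) are standard operations. As a minimal, library-free sketch of one of them — salt-and-pepper noise on a toy grayscale grid, with illustrative values not taken from the paper:

```python
import random

def salt_and_pepper(image, amount, seed=0):
    """Flip a fraction of pixels to 0 (pepper) or 255 (salt).

    image: 2D list of grayscale values in [0, 255].
    amount: fraction of pixels to corrupt (collisions may reduce this).
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the input stays intact
    n_hits = int(amount * h * w)
    for _ in range(n_hits):
        y, x = rng.randrange(h), rng.randrange(w)
        out[y][x] = 0 if rng.random() < 0.5 else 255
    return out

# A flat mid-gray 8x8 image; 10% of its pixels get corrupted.
img = [[128] * 8 for _ in range(8)]
noisy = salt_and_pepper(img, amount=0.1)
```

Seeding the generator keeps augmented datasets reproducible across training runs, which matters when comparing model variants on the same corrupted data.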

  • New
  • Research Article
  • 10.7717/peerj-cs.3536
Nocturnal non-speech sound classification with multi-spectrogram feature fusion and an attention-based stacked hybrid convolutional bidirectional long short-term memory–vision transformer architecture
  • Feb 2, 2026
  • PeerJ Computer Science
  • Ensar Arif Sağbaş

Nocturnal non-speech sounds encapsulate critical physiological and behavioral information, making them a valuable modality for non-invasive assessment of sleep quality. Despite this potential, existing approaches predominantly rely on single-view spectral features or shallow learning architectures, limiting their ability to generalize across diverse acoustic patterns. To overcome these limitations, this study proposes a hybrid deep learning architecture tailored for the classification of seven distinct nocturnal sound categories. The system employs a tri-branch design that independently processes Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, and constant-Q transform (CQT)-spectrogram representations. Each branch passes through a dedicated pipeline comprising convolutional neural networks (CNN), bidirectional long short-term memory (BiLSTM) layers, and attention-equipped vision transformers (ViT). This configuration facilitates hierarchical learning of local, temporal, and global contextual features. The softmax outputs of each branch are fused using a stacking ensemble strategy, with an XGBoost-based meta-classifier performing the final decision integration. A complementary weighted ensemble is also implemented for comparative evaluation. Experimental results on a publicly available seven-class non-speech sound dataset demonstrate the proposed model’s outstanding performance, achieving 99.71% accuracy under 10-fold cross-validation, along with consistently high precision, recall, and F1-scores across all classes. Comparative benchmarks show substantial improvements over existing state-of-the-art models, including CNNs, long short-term memory (LSTM) variants, classical machine learning approaches, and metaheuristic-based ensembles. 
Supporting analyses such as confidence score distributions and dimensionality reduction visualizations (principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE)) further validate the model’s robustness and discriminative power. These findings highlight the effectiveness of integrating multi-spectral representations, deep hierarchical modeling, and ensemble strategies for high-fidelity nocturnal non-speech sound classification.
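Several entries on this page, including the one above, fuse per-branch softmax outputs at decision time. As a minimal, library-free sketch of the simpler of the two strategies mentioned — a weighted ensemble over three hypothetical branch outputs (all numbers below are made up for illustration, and this is not the paper's stacking meta-classifier):

```python
# Late-fusion sketch: combine per-branch class probabilities with a
# weighted average, then pick the argmax class.

def weighted_late_fusion(branch_probs, weights):
    """Fuse per-branch probability vectors via a weighted average.

    branch_probs: list of equal-length probability lists, one per branch.
    weights: one non-negative weight per branch (normalized internally).
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    n_classes = len(branch_probs[0])
    return [
        sum(w * probs[c] for w, probs in zip(norm, branch_probs))
        for c in range(n_classes)
    ]

# Three hypothetical branches (e.g. MFCC, Mel-spectrogram, CQT) voting
# over four classes; the second branch is weighted most heavily.
mfcc = [0.10, 0.70, 0.10, 0.10]
mel = [0.05, 0.85, 0.05, 0.05]
cqt = [0.20, 0.40, 0.30, 0.10]

fused = weighted_late_fusion([mfcc, mel, cqt], weights=[1.0, 2.0, 1.0])
predicted_class = max(range(len(fused)), key=fused.__getitem__)
```

A stacking ensemble replaces the fixed weights with a trained meta-classifier that takes the concatenated branch outputs as input, which is what lets it learn when to trust each branch.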

  • New
  • Research Article
  • 10.1080/01431161.2026.2617731
Multiscale feature fusion analysis of multibeam and side-scan sonar for seafloor sediment classification in shallow reef environments
  • Feb 2, 2026
  • International Journal of Remote Sensing
  • Jiamiao Wang + 6 more

Shallow marine environments are critical transition zones between land and sea and exhibit dynamic, spatially heterogeneous seafloor sediments. Conventional single-source acoustic mapping using multibeam echosounder (MBES) or side-scan sonar (SSS) often fails to fully capture this complexity because of survey-orientation effects and instrument-related limitations. To address these challenges, this study proposes a multisource acoustic seafloor mapping framework that integrates MBES bathymetry and SSS intensity data for improved sediment classification in shallow reef environments. A spatial coregistration approach is applied to align MBES and SSS datasets by extracting common feature points and using a high-precision transformation model. A multiscale–multidirectional (MS–MD) feature extraction strategy is then developed by combining bathymetric and acoustic texture features. The Relief-F algorithm is employed to optimize feature selection and reduce redundancy. Five supervised classifiers—Random Forest (RF), K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Random Under-Sampling Boosting (RUS Boost), and Broad Learning System (BLS)—are evaluated. Results show that under the MS–MD feature strategy, the RF model achieves an overall accuracy of 98.26% using fused multisource data, outperforming the MBES-only RF baseline (96.96%). Other classifiers show consistent improvements under the same strategy. In addition to accuracy gains, multisource fusion produces sediment maps with improved spatial coherence and clearer transitional boundaries. Overall, the proposed framework demonstrates the potential of multisource acoustic integration and machine-learning-based classification for high-resolution benthic habitat mapping in complex shallow-water environments.
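In contrast to the decision-level fusion used elsewhere on this page, the entry above fuses at the feature level. A minimal sketch of that idea — normalize each source's features to a common scale, then concatenate per-sample vectors (the feature names and values are hypothetical, not from the paper):

```python
# Feature-level fusion sketch: min-max normalize each feature column
# within its source, then concatenate the per-sample vectors.

def min_max_normalize(column):
    """Scale a list of values into [0, 1]; constant columns become 0."""
    lo, hi = min(column), max(column)
    span = hi - lo
    return [(v - lo) / span if span else 0.0 for v in column]

def fuse_features(bathy_rows, intensity_rows):
    """Concatenate normalized bathymetric and backscatter features."""
    def normalize_rows(rows):
        cols = list(zip(*rows))                    # rows -> columns
        norm_cols = [min_max_normalize(list(c)) for c in cols]
        return [list(r) for r in zip(*norm_cols)]  # columns -> rows
    b = normalize_rows(bathy_rows)
    i = normalize_rows(intensity_rows)
    return [rb + ri for rb, ri in zip(b, i)]

# Toy samples: [depth, slope] from MBES and [mean, variance] from SSS.
bathy = [[12.0, 0.1], [18.0, 0.4], [15.0, 0.2]]
intensity = [[80.0, 5.0], [40.0, 9.0], [60.0, 7.0]]
fused = fuse_features(bathy, intensity)  # 3 samples x 4 features
```

Normalizing per source before concatenation keeps one sensor's numeric range from dominating distance-based classifiers such as the KNN and SVM evaluated in the paper.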

  • New
  • Research Article
  • Citations: 1
  • 10.1016/j.compbiolchem.2025.108674
MediFlora-Net: Quantum-enhanced deep learning for precision medicinal plant identification.
  • Feb 1, 2026
  • Computational biology and chemistry
  • Uma K V + 3 more

  • New
  • Research Article
  • 10.1109/tla.2026.11369405
Recommending Move Method Refactoring Opportunities Based on Feature Fusion and Deep Learning
  • Feb 1, 2026
  • IEEE Latin America Transactions
  • Yang Zhang + 3 more

  • New
  • Research Article
  • 10.1016/j.compag.2025.111315
YMAD: An efficient poultry gender classification method based on feature fusion and YOLO model
  • Feb 1, 2026
  • Computers and Electronics in Agriculture
  • Xiaoming Zhao + 7 more

  • New
  • Research Article
  • 10.1016/j.knosys.2025.114975
DeepCut++: Graph-based unsupervised segmentation with feature fusion and diffusion learning
  • Feb 1, 2026
  • Knowledge-Based Systems
  • Nazila Pourhaji Aghayengejeh + 3 more

  • New
  • Research Article
  • Citations: 1
  • 10.1016/j.compbiolchem.2025.108755
XP-GCN: Extreme learning machines and parallel graph convolutional networks for high-throughput prediction of blood-brain barrier penetration based on feature fusion.
  • Feb 1, 2026
  • Computational biology and chemistry
  • Muhammed Ali Pala

  • New
  • Research Article
  • 10.1016/j.compbiolchem.2025.108704
GAN-based novel feature selection approach with hybrid deep learning for heartbeat classification from ECG signal.
  • Feb 1, 2026
  • Computational biology and chemistry
  • S Haseena Beegum + 1 more

  • New
  • Research Article
  • Citations: 2
  • 10.1016/j.inffus.2025.103534
A data-driven digital twin model for bridge health monitoring using feature fusion and unsupervised deep learning
  • Feb 1, 2026
  • Information Fusion
  • Vahid Mousavi + 3 more

  • New
  • Research Article
  • 10.1016/j.mri.2025.110574
GL-mamba-net: A magnetic resonance imaging restoration network with global-local mamba.
  • Feb 1, 2026
  • Magnetic resonance imaging
  • Ke Liang + 5 more

  • New
  • Research Article
  • 10.1016/j.inffus.2025.103496
Fusion of deep feature and apparent feature for flotation grade prediction based on apparent information guidance encoder–decoder network
  • Feb 1, 2026
  • Information Fusion
  • Yuming Wu + 4 more

  • New
  • Research Article
  • 10.1016/j.bspc.2025.108458
Manual acupuncture manipulation recognition model with a multimodal fusion of tactile and visual features
  • Feb 1, 2026
  • Biomedical Signal Processing and Control
  • Chong Su + 10 more

  • New
  • Research Article
  • 10.1016/j.chemolab.2026.105628
Multimodal fusion of CT features and density for rapid prediction of raw-coal ash
  • Feb 1, 2026
  • Chemometrics and Intelligent Laboratory Systems
  • Shuxian Su + 6 more

  • New
  • Research Article
  • 10.1016/j.csl.2025.101873
Deep feature representations and fusion strategies for speech emotion recognition from acoustic and linguistic modalities: A systematic review
  • Feb 1, 2026
  • Computer Speech & Language
  • Andrea Chaves-Villota + 4 more

Copyright 2026 Cactus Communications. All rights reserved.
