Intrusion detection algorithm based on multi-scale feature fusion


Similar Papers
  • Research Article
  • 10.1016/j.brainresbull.2022.03.007
Many heads are better than one: A multiscale neural information feature fusion framework for spatial route selections decoding from multichannel neural recordings of pigeons
  • Mar 12, 2022
  • Brain Research Bulletin
  • Mengmeng Li + 4 more


  • Research Article
  • 10.1016/j.neuroscience.2025.07.020
A cross-subject MDD detection approach based on multiscale nonlinear analysis in resting state EEG.
  • Aug 1, 2025
  • Neuroscience
  • Zhen Zhang + 7 more


  • Conference Article
  • 10.1109/iciscae55891.2022.9927534
Image Classification of Biota Specimens Based On Multi-Scale Bilinear Feature Fusion Model
  • Sep 23, 2022
  • Wang Yifan + 3 more

Objective: Biological image detection and classification is a classic and challenging task. To accurately and automatically identify biological species and their relationships, a new classification model and tree structure are proposed to improve the performance of biological detection and classification. Methods: Two groups of experiments were set up. YOLOv5, which is easy to deploy, serves as the first layer of the tree structure to roughly locate the target and remove noise outside it. The first group of experiments then trained ResNet-101, ResNet-200 and DenseNet-201 at different levels of the biological tree structure, and the results were compared with those of using YOLOv5 directly. In the second group of experiments, the new multi-scale bilinear feature fusion classification model classifies images stage by stage to obtain the final fine-grained classification. Results: In the first group of experiments, the best result on a customized dataset of 80 insect species was 1.09% higher in mAP than YOLOv5x. In the second group, accuracy on CUB-200-2011 (Caltech-UCSD Birds-200-2011) reached 86.4%, which is 2.3% higher than the B-CNN classification model and verifies the effectiveness of the improved method and model. Conclusion: Multi-scale feature fusion yields more complementary information. Bilinear fusion of features at different scales can fully express features at each scale and improve the accuracy of classification tasks. The tree structure effectively removes irrelevant noise, classifies layer by layer, simplifies the classification task, enriches the feature information, and makes the results more accurate.
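The abstract does not spell out the fusion module, but the core idea it names, bilinear fusion of feature maps taken at different scales, can be sketched as below. The class name, channel sizes, and the PyTorch framing are illustrative assumptions rather than the authors' implementation; the signed square-root and L2 normalisation follow common B-CNN practice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMultiScaleFusion(nn.Module):
    """Illustrative sketch: bilinear pooling of feature maps from two scales."""
    def __init__(self, c_fine=256, c_coarse=512, num_classes=200):
        super().__init__()
        # Classifier over the flattened bilinear (outer-product) descriptor.
        self.fc = nn.Linear(c_fine * c_coarse, num_classes)

    def forward(self, feat_fine, feat_coarse):
        # feat_fine:   (B, c_fine,   H, W)  earlier, higher-resolution stage
        # feat_coarse: (B, c_coarse, h, w)  deeper, lower-resolution stage
        b, c1, h, w = feat_fine.shape
        # Bring both maps to the same spatial size before the outer product.
        feat_coarse = F.interpolate(feat_coarse, size=(h, w), mode="bilinear",
                                    align_corners=False)
        f1 = feat_fine.reshape(b, c1, h * w)
        f2 = feat_coarse.reshape(b, -1, h * w)
        # Bilinear pooling: channel-wise outer product averaged over locations.
        bilinear = torch.bmm(f1, f2.transpose(1, 2)) / (h * w)   # (B, c1, c2)
        bilinear = bilinear.reshape(b, -1)
        # Signed square-root and L2 normalisation, common in B-CNN-style models.
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-8)
        bilinear = F.normalize(bilinear)
        return self.fc(bilinear)
```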

  • Research Article
  • Cited by 2
  • 10.1155/2021/5113151
Forecasting Variation Trends of Stocks via Multiscale Feature Fusion and Long Short-Term Memory Learning
  • Sep 21, 2021
  • Scientific Programming
  • Yezhen Liu + 3 more

Forecasting stock price trends accurately is a huge challenge because the environment of stock markets is extremely stochastic and complicated. This challenge persistently motivates us to seek reliable pathways to guide stock trading. Since the Long Short-Term Memory (LSTM) network has a dedicated gate structure well suited to prediction based on contextual features, we propose a novel LSTM-based model. We also devise a multiscale convolutional feature fusion mechanism for the model to extensively exploit the contextual relationships hidden in consecutive time steps. The significance of the designed scheme is twofold. (1) Benefiting from the gate structure designed for both long- and short-term memories, our model can use the given stock history data more adaptively than traditional models, which greatly improves prediction performance in financial time series (FTS) scenarios and thus benefits the prediction of stock trends. (2) The multiscale convolutional feature fusion mechanism diversifies the feature representation and captures the essence of FTS features more extensively than traditional models, which improves generalizability. Empirical studies conducted on three classic stock history data sets, i.e., S&P 500, DJIA, and VIX, demonstrated the effectiveness and stability of the proposed method against several state-of-the-art models using multiple validity indices. For example, our method achieved the highest average directional accuracy (around 0.71) on the three employed stock data sets.
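The paper's exact layer configuration is not reproduced here; the sketch below only illustrates the general pattern the abstract describes: parallel 1D convolutions with different kernel sizes whose outputs are concatenated and passed to an LSTM with a regression head. All names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleConvLSTMForecaster(nn.Module):
    """Sketch: multiscale Conv1d feature fusion feeding an LSTM, then a regressor."""
    def __init__(self, in_features=5, channels=32, hidden=64):
        super().__init__()
        # Parallel convolutions over the time axis with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv1d(in_features, channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.lstm = nn.LSTM(input_size=channels * 3, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-step trend / price estimate

    def forward(self, x):
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        x = x.transpose(1, 2)
        fused = torch.cat([b(x) for b in self.branches], dim=1)  # channel concat
        fused = fused.transpose(1, 2)                            # back to (B, T, C)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])                             # last time step

# Toy usage: 8 sequences of 30 days with 5 indicators each.
model = MultiScaleConvLSTMForecaster()
pred = model(torch.randn(8, 30, 5))
print(pred.shape)  # torch.Size([8, 1])
```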

  • Research Article
  • Cited by 4
  • 10.1186/s40494-024-01172-x
Ancient mural segmentation based on multiscale feature fusion and dual attention enhancement
  • Feb 16, 2024
  • Heritage Science
  • Jianfang Cao + 5 more

To address the fuzzy segmentation boundaries, missing details, small-target losses and low efficiency of traditional segmentation methods in ancient mural image segmentation, this paper proposes a mural segmentation model based on multiscale feature fusion and dual attention enhancement (MFAM). The model uses the MobileViT network, which integrates a coordinate attention mechanism, as the feature extraction backbone. It attains global and local expression capabilities through self-attention, convolution, and coordinate attention, and focuses on location information to expand the receptive field and improve feature extraction efficiency. An attention-optimized residual atrous spatial pyramid pooling feature enhancement module (A_R_ASPP) is proposed; it uses residual connections to address the loss of small targets in murals caused by the high sampling rate of atrous convolution, and a feature attention mechanism to adaptively adjust feature map weights according to channel importance. A dual attention-enhanced feature fusion module is proposed for multiscale decoder feature fusion to improve segmentation. This module uses a cross-level aggregation strategy and an attention mechanism to weight the importance of different feature levels and obtain multilevel semantic feature representations. Compared with other models, the proposed model improves the mean intersection over union (MIoU) by 3.06% and the mean pixel accuracy (MPA) by 1.81% on a mural dataset. The model is shown to improve segmentation details, efficiency and small-target segmentation for mural images, providing a new method for segmenting ancient mural images.
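A_R_ASPP's internals are not given in the abstract; the following minimal sketch only assembles the named ingredients, parallel atrous convolutions, a channel attention re-weighting, and a residual connection, under assumed channel sizes and dilation rates (PyTorch, illustrative only).

```python
import torch
import torch.nn as nn

class ResidualAttentionASPP(nn.Module):
    """Sketch: atrous spatial pyramid pooling with a residual connection and
    channel attention, loosely following the ingredients named for A_R_ASPP."""
    def __init__(self, channels=256, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            for d in dilations
        ])
        self.project = nn.Conv2d(channels * len(dilations), channels, 1)
        # Squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid())

    def forward(self, x):
        pyramid = torch.cat([b(x) for b in self.branches], dim=1)
        y = self.project(pyramid)
        y = y * self.attn(y)      # re-weight channels by importance
        return x + y              # residual connection preserves fine details
```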

  • Conference Article
  • Cited by 3
  • 10.1109/ccdc55256.2022.10033926
RGB-D Saliency Detection based on Cross-Modal and Multi-scale Feature Fusion
  • Aug 15, 2022
  • Xuxing Zhu + 2 more

Mainstream algorithms introduce noise from the depth map, and high-level features are diluted during fusion. To address these problems, an RGB-D saliency detection method based on cross-modal and multi-scale feature fusion is proposed. In this paper, we propose a Cross-Modal Feature Fusion Module (CMFFM) to fuse the RGB image, which carries strong semantic features, with the depth map, which carries strong positional features. On this basis, CMFFM effectively suppresses the interference of noise in the depth map. Furthermore, a Multi-Scale Residual Channel Attention Feature Fusion Module (MRCAFFM) is proposed to fuse high-level and low-level features level by level, which enriches the expression of high-level semantic features and enhances the capability of feature selection. Finally, experimental results on four benchmark datasets show that the overall performance of the algorithm is better than that of the compared algorithms.

  • Research Article
  • Cited by 18
  • 10.1117/1.jmi.9.5.052402
Efficient multiscale fully convolutional UNet model for segmentation of 3D lung nodule from CT image.
  • May 11, 2022
  • Journal of Medical Imaging
  • Sundaresan A Agnes + 1 more

Purpose: Segmentation of lung nodules in chest CT images is essential for image-driven lung cancer diagnosis and follow-up treatment planning. Manual segmentation of lung nodules is subjective because it depends on the knowledge and experience of the specialist. We propose a multiscale fully convolutional three-dimensional UNet (MF-3D UNet) model for automatic segmentation of lung nodules in CT images. Approach: The proposed model employs two strategies, fusion of multiscale features with Maxout aggregation and trainable downsampling, to improve the performance of nodule segmentation in 3D CT images. The fusion of multiscale (fine and coarse) features with the Maxout function allows the model to retain the most important features while suppressing low-contribution features. The trainable downsampling process is used instead of fixed pooling-based downsampling. Results: The performance of the proposed MF-3D UNet model is examined with CT scans from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. A quantitative and visual comparative analysis of the proposed work with various customized UNet models is also presented. The comparative analysis shows that the proposed model yields reliable segmentation results compared with other methods. MF-3D UNet shows encouraging results in the segmentation of different types of nodules, including juxta-pleural, solitary pulmonary, and non-solid nodules, in terms of average Dice similarity coefficient, outperforming other CNN-based segmentation models. Conclusions: The proposed model accurately segments the nodules using multiscale feature aggregation and trainable downsampling approaches. In addition, 3D operations enable precise segmentation of complex nodules using inter-slice connections.
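As a rough illustration of the two strategies named in the abstract, the sketch below fuses a fine and a coarse 3D feature map with an element-wise Maxout and replaces fixed pooling with a strided (trainable) convolution. Channel counts and module names are assumptions, not the published MF-3D UNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaxoutMultiScaleFusion3D(nn.Module):
    """Sketch: fuse fine and coarse 3D feature maps with an element-wise Maxout,
    keeping the stronger response at every voxel instead of concatenating."""
    def __init__(self, channels=32):
        super().__init__()
        # Trainable downsampling: a strided convolution replaces fixed max pooling.
        self.learned_down = nn.Conv3d(channels, channels, kernel_size=2, stride=2)

    def forward(self, fine, coarse):
        # fine:   (B, C, D, H, W)        higher-resolution features
        # coarse: (B, C, D/2, H/2, W/2)  deeper, lower-resolution features
        coarse_up = F.interpolate(coarse, size=fine.shape[2:], mode="trilinear",
                                  align_corners=False)
        fused = torch.maximum(fine, coarse_up)   # Maxout: keep the dominant feature
        return self.learned_down(fused)          # learned, not fixed, downsampling
```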

  • Research Article
  • Cited by 10
  • 10.1016/j.compag.2024.109185
A multi-scale semantic feature fusion method for remote sensing crop classification
  • Jun 24, 2024
  • Computers and Electronics in Agriculture
  • Xizhi Huang + 2 more


  • Research Article
  • Cited by 22
  • 10.3390/rs16050907
Object Detection in Remote Sensing Images Based on Adaptive Multi-Scale Feature Fusion Method
  • Mar 4, 2024
  • Remote Sensing
  • Chun Liu + 3 more

Multi-scale object detection is critical for analyzing remote sensing images. Traditional feature pyramid networks, which aim to accommodate objects of varying sizes through multi-level feature extraction, face significant challenges due to the diverse scale variations present in remote sensing images. This situation often forces single-level features to span a broad spectrum of object sizes, complicating accurate localization and classification. To tackle these challenges, this paper proposes an algorithm that incorporates an adaptive multi-scale feature enhancement and fusion module (ASEM), which improves remote sensing object detection through sophisticated multi-scale feature fusion. Our method begins by employing a feature pyramid to gather coarse multi-scale features. It then integrates a fine-grained feature extraction module at each level, using atrous convolutions with varied dilation rates to refine the multi-scale features, which markedly improves information capture across widely varied object scales. Furthermore, an adaptive enhancement module applies an attention mechanism to fuse the features of each level, concentrating on the features at critical scales and significantly enhancing the capture of essential feature information. Compared with the baseline method, Rotated Faster R-CNN, our method achieved an mAP of 74.21% (+0.81%) on the DOTA-v1.0 dataset and an mAP of 84.90% (+9.2%) on the HRSC2016 dataset. These results validate the effectiveness and practicality of our method and demonstrate its significant application value in multi-scale remote sensing object detection tasks.
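The ASEM module itself is not specified here; the sketch below only illustrates the described pattern of refining one pyramid level with atrous convolutions at several dilation rates and fusing the refined maps with learned attention weights. Dilation rates, channel width, and the gating design are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveScaleEnhancement(nn.Module):
    """Sketch: refine one pyramid level with atrous convolutions at several
    dilation rates, then fuse the refined maps with learned attention weights."""
    def __init__(self, channels=256, dilations=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        # One scalar attention weight per branch, predicted from global context.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1))

    def forward(self, x):
        weights = self.gate(x)                                        # (B, K, 1, 1)
        refined = torch.stack([b(x) for b in self.branches], dim=1)   # (B, K, C, H, W)
        fused = (refined * weights.unsqueeze(2)).sum(dim=1)
        return x + fused                                              # enhanced level
```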

  • Research Article
  • Cited by 28
  • 10.1016/j.bspc.2022.104305
LiM-Net: Lightweight multi-level multiscale network with deep residual learning for automatic liver segmentation in CT images
  • Oct 21, 2022
  • Biomedical Signal Processing and Control
  • Devidas T Kushnure + 2 more


  • Research Article
  • 10.3390/app15084595
A Multi-Scale Feature Fusion Hybrid Convolution Attention Model for Birdsong Recognition
  • Apr 21, 2025
  • Applied Sciences
  • Lianglian Gu + 6 more

Birdsong is a valuable indicator of rich biodiversity and ecological significance. Although feature extraction has demonstrated satisfactory performance in classification, single-scale feature extraction methods may not fully capture the complexity of birdsong, potentially leading to suboptimal classification outcomes. Integrating multi-scale feature extraction and fusion enables the model to better handle scale variations, thereby enhancing its adaptability across different scales. To address this issue, we propose a multi-scale hybrid convolutional attention model (MUSCA). This method combines depthwise separable convolution and traditional convolution for feature extraction and incorporates self-attention and spatial attention mechanisms to refine spatial and channel features, thereby improving the effectiveness of multi-scale feature extraction. To further enhance multi-scale feature fusion, a layer-by-layer alignment feature fusion method is developed to establish deeper correlations, improving classification accuracy and robustness. Using this method, we identified 20 bird species on three spectrogram representations, the wavelet spectrogram, log-Mel spectrogram and log-spectrogram, with recognition rates of 93.79%, 96.97% and 95.44%, respectively, improvements of 3.26%, 1.88% and 3.09% over a ResNet-18 model. The results indicate that the proposed MUSCA method is competitive with recent state-of-the-art methods.
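The MUSCA block is not detailed in the abstract; as a loose illustration of the named ingredients, the sketch below runs a depthwise-separable branch and a standard-convolution branch in parallel and refines the fused result with a simple spatial attention map. Kernel sizes, channel counts, and the attention design are assumptions.

```python
import torch
import torch.nn as nn

class HybridConvAttentionBlock(nn.Module):
    """Sketch: a depthwise-separable branch and a standard-conv branch extract
    features in parallel; a spatial attention map refines the fused result."""
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        self.separable = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, 1))                          # pointwise
        self.standard = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # Spatial attention from channel-wise average and max statistics.
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        fused = self.separable(x) + self.standard(x)
        stats = torch.cat([fused.mean(dim=1, keepdim=True),
                           fused.amax(dim=1, keepdim=True)], dim=1)
        return fused * self.spatial_attn(stats)

# Toy usage on a 1-channel log-Mel spectrogram.
block = HybridConvAttentionBlock()
out = block(torch.randn(4, 1, 128, 128))
print(out.shape)  # torch.Size([4, 32, 128, 128])
```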

  • Research Article
  • Cited by 24
  • 10.1016/j.jvcir.2023.103981
Effective image tampering localization with multi-scale ConvNeXt feature fusion
  • Nov 11, 2023
  • Journal of Visual Communication and Image Representation
  • Haochen Zhu + 4 more


  • Conference Article
  • Cited by 2
  • 10.1109/globecom46510.2021.9685805
CMF Net: Detecting Objects in Infrared Traffic Image with Combination of Multiscale Features
  • Dec 1, 2021
  • Zhifang Liao + 3 more

Infrared image target detection has always been a hot research topic, but there is still little research on infrared target detection in the field of transportation. In this paper, we use transfer learning to transfer a deep learning object detection framework from the visible domain to the infrared domain and propose CMF Net, a target detection model based on multi-scale feature fusion. CMF Net uses two multi-scale feature extraction mechanisms together with feature fusion, so that the final output feature map of the backbone network contains not only low-level visual features, which benefit target localization, but also high-level semantic features, which benefit target recognition, and can adapt to the multi-scale characteristics of targets. Experiments verified the advantages of CMF Net: its mAP on the test data of the infrared image dataset FLIR reached about 71%, an increase of about 13% over Faster R-CNN, about 6% over YOLOv3, and about 17% over SSD.

  • Research Article
  • Cited by 1
  • 10.3390/rs17081390
Remote Sensing Image Segmentation Using Vision Mamba and Multi-Scale Multi-Frequency Feature Fusion
  • Apr 14, 2025
  • Remote Sensing
  • Yice Cao + 4 more

Rapid advancements in remote sensing (RS) imaging technology have heightened the demand for the precise and efficient interpretation of large-scale, high-resolution RS images. Although segmentation algorithms based on convolutional neural networks (CNNs) or Transformers have achieved significant performance improvements, the trade-off between segmentation precision and computational complexity remains a key limitation for practical applications. Therefore, this paper proposes CVMH-UNet—a hybrid semantic segmentation network that integrates the Vision Mamba (VMamba) framework with multi-scale feature fusion—to achieve high-precision and relatively efficient RS image segmentation. CVMH-UNet comprises the following two core modules: the hybrid visual state space block (HVSSBlock) and the multi-frequency multi-scale feature fusion block (MFMSBlock). The HVSSBlock integrates convolutional branches to enhance local feature extraction while employing a cross 2D scanning method (CS2D) to capture global information from multiple directions, enabling the synergistic modeling of global and local features. The MFMSBlock introduces multi-frequency information via 2D Discrete Cosine Transform (2D DCT) and extracts multi-scale local details through point-wise convolution, thereby optimizing refined feature fusion in skip connections between the encoder and decoder. Experimental results on benchmark RS datasets demonstrate that CVMH-UNet achieves state-of-the-art segmentation accuracy with optimal computational efficiency, surpassing existing advanced methods.

  • Research Article
  • Cited by 96
  • 10.1109/tnnls.2021.3062070
A Deeply Supervised Convolutional Neural Network for Pavement Crack Detection With Multiscale Feature Fusion.
  • Mar 15, 2021
  • IEEE Transactions on Neural Networks and Learning Systems
  • Zhong Qu + 3 more

Automatic crack detection is vital for efficient and economical road maintenance. With the explosive development of convolutional neural networks (CNNs), recent crack detection methods are mostly based on CNNs. In this article, we propose a deeply supervised convolutional neural network for crack detection with a novel multiscale convolutional feature fusion module. Within this module, high-level features are introduced directly into low-level features at different convolutional stages. In addition, deep supervision provides integrated direct supervision for convolutional feature fusion, which helps improve model convergence and the final crack detection performance. Multiscale convolutional features learned at different convolution stages are fused together to robustly represent cracks, whose geometric structures are complicated and hardly captured by single-scale features. To demonstrate its superiority and generalizability, we evaluate the proposed network on three public crack data sets. Extensive experimental results demonstrate that our method outperforms other state-of-the-art crack detection, edge detection, and image segmentation methods in terms of F1-score and mean IU.
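As a hedged sketch of the fusion pattern the abstract describes, the code below injects an upsampled high-level feature map into a low-level one and attaches a side output so that the fusion stage receives direct (deep) supervision. The module name, channel widths, and loss choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedFusion(nn.Module):
    """Sketch: upsample a high-level feature map, inject it into a low-level one,
    and attach a side output so each fusion stage receives direct supervision."""
    def __init__(self, low_ch=64, high_ch=256):
        super().__init__()
        self.reduce_high = nn.Conv2d(high_ch, low_ch, 1)  # match channel widths
        self.fuse = nn.Conv2d(low_ch, low_ch, 3, padding=1)
        self.side_out = nn.Conv2d(low_ch, 1, 1)           # per-stage crack map

    def forward(self, low_feat, high_feat, target=None):
        high_up = F.interpolate(self.reduce_high(high_feat),
                                size=low_feat.shape[2:], mode="bilinear",
                                align_corners=False)
        fused = self.fuse(low_feat + high_up)
        side_logits = self.side_out(fused)
        # Deep supervision: a loss on every side output during training.
        side_loss = None
        if target is not None:
            side_loss = F.binary_cross_entropy_with_logits(
                side_logits, F.interpolate(target, size=side_logits.shape[2:]))
        return fused, side_logits, side_loss
```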
