Abstract

To address insufficient feature extraction and gradient degradation in deepening network structures for image classification, this paper presents ResGMEANet (Residual Group Multi-scale Enhanced Attention Network). The model introduces a multi-scale attention enhancement module, inspired by the original backbone's ability to capture feature correlations independently along the channel and spatial dimensions. By applying shuffle operations and feature transformations within each group, the method expands the receptive field using convolution kernels of multiple sizes. In addition, an improved tensor-synthesis attention, built on conventional convolutional attention, derives attention feature maps after feature enhancement. Evaluation on the CIFAR-10 and CIFAR-100 datasets shows that ResGMEANet outperforms both the original backbone and several existing mainstream methods in classification accuracy. This work aims to offer a new perspective on combining residual neural networks with different attention mechanisms.
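The grouped shuffle operation mentioned above is a standard building block in group-based attention modules: channels are split into groups, and a reshape-transpose-reshape permutation mixes information across groups before the per-group feature transformations. The paper does not give its exact implementation, so the following is a minimal sketch of a generic ShuffleNet-style channel shuffle (the function name and numpy formulation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Permute channels across groups (ShuffleNet-style shuffle).

    This is an illustrative sketch, not the paper's implementation.
    x: feature map of shape (N, C, H, W); C must be divisible by `groups`.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by group count"
    # split channels into (groups, channels_per_group), swap the two axes,
    # then flatten back so each output group mixes channels from all inputs
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# example: 4 channels filled with their own index, shuffled over 2 groups
x = np.arange(4, dtype=float).reshape(1, 4, 1, 1) * np.ones((1, 4, 2, 2))
y = channel_shuffle(x, groups=2)
```

With 4 channels and 2 groups, the channel order [0, 1, 2, 3] becomes [0, 2, 1, 3], so subsequent per-group convolutions (e.g. with different kernel sizes for the multi-scale branches) each see channels originating from every group.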
