Abstract

The attention mechanism, one of the most widely used components in computer vision, helps neural networks emphasize significant elements and suppress irrelevant ones. However, most channel attention mechanisms encode only channel information and ignore spatial information, which limits model representation and object detection performance, while spatial attention modules are often complex and computationally expensive. To strike a balance between performance and complexity, this paper proposes a lightweight Mixed Local Channel Attention (MLCA) module that improves object detection networks by jointly incorporating channel and spatial information, as well as local and global information, to enhance the expressive power of the network. On this basis, the MobileNet-Attention-YOLO (MAY) algorithm is presented for comparing the performance of different attention modules. On the PASCAL VOC and SIMD datasets, MLCA achieves a better balance between model representation efficacy, performance, and complexity than alternative attention techniques. Compared with the Squeeze-and-Excitation (SE) attention mechanism on the PASCAL VOC dataset and the Coordinate Attention (CA) method on the SIMD dataset, mAP is improved by 1.0% and 1.5%, respectively.
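The core idea described above, mixing a global channel descriptor with local spatial descriptors before rescaling the feature map, can be illustrated with a minimal numpy sketch. This is not the paper's actual MLCA implementation: the function name `mlca_sketch` is hypothetical, the learned 1-D convolutions of a real attention module are replaced here by fixed sigmoid gates, and the local branch is approximated by simple patch-wise average pooling.

```python
import numpy as np

def mlca_sketch(x, local_size=2):
    """Illustrative mixed local/global channel attention (not the paper's code).

    x: feature map of shape (C, H, W); H and W must be divisible by local_size.
    """
    C, H, W = x.shape
    # Global branch: global average pooling gives one descriptor per channel.
    gap = x.mean(axis=(1, 2))                                   # (C,)
    # Local branch: average pooling over a local_size x local_size spatial grid.
    lap = x.reshape(C, local_size, H // local_size,
                    local_size, W // local_size).mean(axis=(2, 4))  # (C, ls, ls)
    # Fixed sigmoid gates stand in for the module's learned transforms.
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    g = sigmoid(gap)[:, None, None]                             # (C, 1, 1)
    # Upsample the local weights back to (C, H, W) by block replication.
    l = np.kron(sigmoid(lap), np.ones((H // local_size, W // local_size)))
    # Mix local and global attention weights and rescale the input.
    return x * 0.5 * (g + l)
```

Because both gates lie in (0, 1), the mixed weight also lies in (0, 1), so each activation is attenuated in proportion to its channel's global statistics and its local neighborhood, which is the channel-plus-spatial, local-plus-global mixing the abstract refers to.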

