Abstract

Capturing comprehensive information about objects of various sizes and shapes within the same convolution layer is typically a challenging task in computer vision. There are two main approaches to capturing such features. The first uses the inception structure and its variants; the second applies larger convolution kernels on specific layers or stacks additional convolution blocks. However, these methods can be computationally expensive or suffer from vanishing gradients. In this paper, to accommodate feature distributions of different sizes and shapes while reducing computational cost, we propose a width- and depth-aware module, named the WD-module, to match feature distributions. Moreover, the proposed WD-module requires less computation and fewer parameters than traditional residual convolution layers. To verify the effectiveness of the proposed method, we built a size- and shape-aware backbone network, named S2A-Net, by stacking WD-modules. Visualizations of heat maps and features show that the proposed S2A-Net can adapt to objects of different sizes and shapes in visual recognition tasks and learn more comprehensive characteristics. Experimental results show that the proposed method achieves higher accuracy in image recognition and outperforms other state-of-the-art networks with the same number of layers.
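
The abstract does not specify the internal design of the WD-module, so the following is only a minimal sketch of the general idea it describes: parallel branches that differ in kernel width and in stacking depth, fused by concatenation, with 1x1 bottlenecks to keep computation and parameter counts below those of a plain residual convolution block. The class name WDModule, the branch layout, and the channel split are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class WDModule(nn.Module):
    """Hypothetical width- and depth-aware block (illustrative only, not
    the paper's exact WD-module): parallel branches with different kernel
    sizes (width-aware) and different numbers of stacked small convolutions
    (depth-aware), fused by channel concatenation."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        mid = out_ch // 4  # 1x1 bottlenecks keep FLOPs and parameters low
        # Shallow branch with a wide kernel: large receptive field per layer.
        self.branch_wide = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),
            nn.Conv2d(mid, mid, 5, padding=2, bias=False),
        )
        # Deeper branch of small kernels: similar receptive field, more depth.
        self.branch_deep = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False),
        )
        # Local branch: a single 3x3 for small-object detail.
        self.branch_local = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False),
        )
        # Pointwise branch: preserves per-pixel information cheaply.
        self.branch_point = nn.Conv2d(in_ch, mid, 1, bias=False)
        self.fuse = nn.Sequential(nn.BatchNorm2d(mid * 4), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate branch outputs so features of several effective
        # receptive fields coexist in one layer's output.
        y = torch.cat(
            [self.branch_wide(x), self.branch_deep(x),
             self.branch_local(x), self.branch_point(x)], dim=1)
        return self.fuse(y)

# Stacking such blocks would give an S2A-Net-style backbone (again, a sketch).
blocks = nn.Sequential(WDModule(64, 64), WDModule(64, 64))
print(blocks(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Under these assumptions, each output position carries responses from several effective receptive fields at once, which is one plausible way a single layer could match objects of different sizes and shapes while staying cheaper than a full-width residual convolution.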
