Abstract
Squeeze-and-Excitation (SE) Networks won the final ImageNet Large-Scale Visual Recognition Challenge (ILSVRC 2017) classification competition and remain highly influential in today's vision community. The SE block, the core of the Squeeze-and-Excitation Network (SENet), adaptively recalibrates channel-wise feature responses and suppresses less useful ones. Because SE blocks can be dropped into existing models and consistently improve performance, they are widely used across a variety of tasks. In this paper, we propose a novel Parametric Sigmoid (PSigmoid) to enhance the SE block; we name the resulting module the PSigmoid SE (PSE) block. The PSE block can not only suppress features in a channel-wise manner but also enhance them. We evaluate our method on four common datasets: CIFAR-10, CIFAR-100, SVHN, and Tiny ImageNet. Experimental results demonstrate the effectiveness of our method. In addition, we analyze the configuration of the PSE block in detail to contrast it with the SE block. Finally, we combine PSE blocks and SE blocks to obtain better performance.
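The contrast the abstract draws — a standard SE block can only attenuate channels, because its sigmoid gate lies in (0, 1), whereas a PSE block can also amplify them — can be sketched in plain NumPy. The standard SE forward pass (squeeze via global average pooling, excitation via FC-ReLU-FC-sigmoid, channel-wise rescaling) follows the original SENet design; the `pse_block` gate `alpha * sigmoid(beta * u)` is a hypothetical parametric-sigmoid form chosen only to illustrate the idea, since the abstract does not give the paper's exact parameterisation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Standard SE block forward pass for one feature map.

    x:  features of shape (C, H, W)
    w1: reduction weights of shape (C // r, C)
    w2: expansion weights of shape (C, C // r)
    """
    z = x.mean(axis=(1, 2))                     # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: FC -> ReLU -> FC -> sigmoid
    return x * s[:, None, None]                 # gate in (0, 1): can only suppress channels

def pse_block(x, w1, w2, alpha, beta):
    """PSE-style block with a *hypothetical* parametric sigmoid gate.

    alpha, beta are learnable per-channel parameters, shape (C,).
    With alpha > 1 the gate can exceed 1, so channels can be
    enhanced as well as suppressed.
    """
    z = x.mean(axis=(1, 2))
    u = w2 @ np.maximum(w1 @ z, 0.0)
    s = alpha * sigmoid(beta * u)               # range (0, alpha), not just (0, 1)
    return x * s[:, None, None]
```

With `alpha = 1` and `beta = 1` the parametric gate reduces to the ordinary sigmoid, so this form contains the standard SE block as a special case — which is one plausible reading of why the two blocks can be combined.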