Abstract

Semantic segmentation is a fundamental problem in multimedia that requires precise per-pixel prediction of object categories. Recently, many researchers have striven to refine pixel-wise features with spatial-contextual information. However, many still overlook cross-channel information, which carries inherent semantics that can improve segmentation performance. On the one hand, during feature extraction, enhancing informative channels and suppressing trivial ones helps acquire valuable semantic features and thus improves segmentation accuracy. On the other hand, during prediction, mining the connections and complementarities between channels helps recover complete objects and further benefits pixel-wise prediction. Based on this idea, we propose a novel Channel-Adaptive Network for semantic segmentation that enhances features from the channel perspective in both the feature extraction and prediction stages. Specifically, we propose two modules: (i) the Comprehensive Information Channel Attention (CiCA) module, which addresses the shortcomings of existing channel attention by learning both the low- and high-frequency components within each channel to emphasize informative channels; and (ii) the Inter-Channel Relationship Reasoning (iCRR) module, which is applied on top of the feature extractor to adaptively enhance interdependent channels by mining the complementary associations between them. Moreover, our Channel-Adaptive Network is highly flexible, with a plug-and-play design. Extensive experiments demonstrate that our method achieves state-of-the-art segmentation performance on three challenging datasets: Cityscapes (82.1%), ADE20K (46.51%), and PASCAL Context (55.0%).
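The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the general idea behind frequency-aware channel attention as described for CiCA (weighting channels using both a low-frequency and a high-frequency per-channel descriptor). The module name `FrequencyChannelAttention`, the choice of descriptors, and the `reduction` bottleneck are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FrequencyChannelAttention(nn.Module):
    """Hypothetical sketch of CiCA-style channel attention.

    For each channel it computes a low-frequency descriptor (the global
    average, i.e. the DC component) and a high-frequency descriptor (the
    mean absolute value of the mean-removed residual), passes both through
    a shared bottleneck MLP, and uses the fused result as a per-channel gate.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Low-frequency descriptor: per-channel global average.
        low = x.mean(dim=(2, 3))                                   # (B, C)
        # High-frequency descriptor: energy of the mean-removed residual.
        high = (x - low.view(b, c, 1, 1)).abs().mean(dim=(2, 3))   # (B, C)
        # Fuse both descriptors into per-channel gates in (0, 1).
        gate = torch.sigmoid(self.mlp(low) + self.mlp(high))       # (B, C)
        # Reweight the input feature map channel-wise.
        return x * gate.view(b, c, 1, 1)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    out = FrequencyChannelAttention(64)(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Because the gate only rescales channels, a block like this can be dropped after any backbone stage without changing the feature shape, which matches the plug-and-play design claimed in the abstract.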
