Abstract

Although semantic segmentation based on Deep Convolutional Neural Networks (DCNNs) has made great progress, the problem that the features produced by deep models are of low resolution, which degrades the final segmentation performance, has not been fully addressed. In this paper, we propose to adaptively combine high-level and low-level features of a DCNN to improve the quality of the features used for semantic segmentation. To this end, we design a feature interweaving neural network module that fuses features from different layers of the DCNN to effectively exploit their complementary properties. In addition, to enhance the complementarity and reduce the contradiction among the features for better fusion, we propose a feature modulation neural network module that modulates the features before interweaving. Furthermore, global information of each image is summarized and used to augment the features, providing guidance for the interweaving. The proposed method is extensively evaluated and compared with state-of-the-art methods on two benchmark semantic segmentation datasets, Cityscapes and PASCAL VOC 2012. The obtained results demonstrate the effectiveness of the proposed method.
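The abstract does not specify the exact designs of the modulation and interweaving modules, so the following is only a minimal PyTorch-style sketch of the general idea: modulate each feature stream using globally pooled context, upsample the low-resolution high-level features, and fuse them with the high-resolution low-level features. All names (`FeatureModulation`, `InterweaveFusion`) and design choices (sigmoid gating from global average pooling, bilinear upsampling, concatenation followed by a 3x3 convolution) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureModulation(nn.Module):
    """Hypothetical modulation: re-weight each channel with a gate
    derived from globally pooled context, before feature fusion."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # summarize global information
            nn.Conv2d(channels, channels, 1),  # per-channel gating weights
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                # channel-wise modulation

class InterweaveFusion(nn.Module):
    """Hypothetical interweaving: modulate both streams, upsample the
    low-resolution high-level features, and fuse with low-level ones."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.mod_low = FeatureModulation(low_ch)
        self.mod_high = FeatureModulation(high_ch)
        self.fuse = nn.Conv2d(low_ch + high_ch, out_ch, 3, padding=1)

    def forward(self, low, high):
        low, high = self.mod_low(low), self.mod_high(high)
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([low, high], dim=1))

# Example: fuse a 1/4-resolution low-level map with a 1/16-resolution
# high-level map from a hypothetical backbone.
low = torch.randn(1, 64, 128, 128)
high = torch.randn(1, 512, 32, 32)
fused = InterweaveFusion(64, 512, 256)(low, high)  # -> (1, 256, 128, 128)
```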
