Abstract
Despite recent progress in semantic segmentation, the segmentation of high and ultra-high resolution images remains highly challenging. Although the latest collaborative global-local semantic segmentation methods such as GLNet [4] and PPN [18] have achieved impressive results, they are inefficient and ill-suited to practical applications. In this paper, we therefore propose Faster-PPN, a novel and efficient collaborative global-local framework built on PPN for semantic segmentation of high and ultra-high resolution images, which strikes a better trade-off between efficiency and effectiveness toward real-time speed. Specifically, we propose Dual Mutual Learning, which performs knowledge distillation mutually between the global and local branches to improve the feature representations of both. Furthermore, we design the Pixel Proposal Fusion Module, a fine-grained selection mechanism that further reduces the number of redundant pixels passed to fusion, thereby improving inference speed. Experimental results on three challenging high and ultra-high resolution datasets, DeepGlobe, ISIC and BACH, demonstrate that Faster-PPN outperforms state-of-the-art approaches in accuracy, inference speed and memory usage. In particular, our method achieves real-time and near real-time speed with 36 FPS on ISIC and 17.7 FPS on DeepGlobe, respectively.
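To make the Dual Mutual Learning idea concrete, the following is a minimal, hypothetical sketch of mutual knowledge distillation between two branches: each branch is pushed toward the other's softened class distribution via a symmetric KL-divergence term. The function names, the temperature parameter, and the symmetric-KL formulation are illustrative assumptions for a single pixel's logits, not the authors' actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-8):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

def dual_mutual_loss(global_logits, local_logits, temperature=2.0):
    """Symmetric distillation loss for one pixel (assumed form):
    each branch learns from the other's temperature-softened
    class distribution, so knowledge flows in both directions."""
    pg = softmax([x / temperature for x in global_logits])
    pl = softmax([x / temperature for x in local_logits])
    return kl_div(pg, pl) + kl_div(pl, pg)
```

In a training loop, this term would be averaged over pixels and added to each branch's supervised segmentation loss; when the two branches agree, the mutual term vanishes, and it grows as their predictions diverge.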