Abstract

Instance segmentation is one of the most challenging tasks in computer vision. Current methods make limited use of boundary information; in dense scenes with occluded instances and complex object shapes, boundary cues become ineffective, resulting in coarse object masks that fail to cover the entire object. To address this challenge, we introduce boundary-guided global feature fusion (BGF), a novel method built on the Mask R-CNN network. We design a boundary branch containing a Boundary Feature Extractor (BFE) module that extracts object boundary features at different stages. To supervise this branch, we construct a binary image dataset of instance boundaries and pretrain the boundary branch on it before training the full network. The Mask R-CNN features and boundary features are then passed to a feature fusion module, where the boundary features supply the shape information needed for detection and segmentation. Finally, a global attention module (GAM) fuses the features further. Extensive experiments demonstrate that our approach outperforms state-of-the-art instance segmentation algorithms, producing finer and more complete instance masks while also improving overall model performance.
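The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of how a boundary branch and attention-based fusion of the kind described might look. The class names `BoundaryFeatureExtractor` and `GlobalAttentionFusion`, the channel width of 256, and the squeeze-and-excitation-style channel attention plus convolutional spatial attention are all assumptions, not the paper's actual BFE or GAM design.

```python
import torch
import torch.nn as nn


class BoundaryFeatureExtractor(nn.Module):
    """Hypothetical BFE block: refines a backbone feature map and predicts a
    binary boundary map that can be supervised with instance-boundary labels."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # 1x1 head producing boundary logits for an auxiliary boundary loss
        self.boundary_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor):
        feat = self.conv(x)
        boundary_logits = self.boundary_head(feat)
        return feat, boundary_logits


class GlobalAttentionFusion(nn.Module):
    """Hypothetical fusion + GAM stand-in: concatenates backbone and boundary
    features, then reweights the fused map with channel and spatial attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Channel attention (squeeze-and-excitation style) -- an assumption
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention over the fused map -- also an assumption
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, backbone_feat: torch.Tensor, boundary_feat: torch.Tensor):
        fused = self.fuse(torch.cat([backbone_feat, boundary_feat], dim=1))
        fused = fused * self.channel_att(fused)   # reweight channels
        fused = fused * self.spatial_att(fused)   # reweight spatial positions
        return fused


if __name__ == "__main__":
    x = torch.randn(2, 256, 64, 64)  # e.g. one FPN level from Mask R-CNN
    bfe = BoundaryFeatureExtractor(256)
    gam = GlobalAttentionFusion(256)
    boundary_feat, boundary_logits = bfe(x)
    out = gam(x, boundary_feat)
    print(out.shape, boundary_logits.shape)
    # torch.Size([2, 256, 64, 64]) torch.Size([2, 1, 64, 64])
```

In this sketch the boundary logits would be trained against the binary boundary dataset mentioned in the abstract, while the fused map would feed the downstream detection and mask heads; how the paper actually wires these pieces into Mask R-CNN is not specified here.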
