Abstract

Existing automatic matting methods tend to obtain alpha mattes directly from the RGB image using semantic segmentation networks, but relying solely on segmentation to achieve high-quality alpha estimation is usually unrealistic. To address this issue, we propose a multi-guidance-based image matting (MGBMatting) model that uses boundary information and semantic features as comprehensive guidance, directing more attention to the unknown regions of the trimap, which are often the most challenging part of matting. The boundary-extraction module introduced in MGBMatting effectively enhances the boundary features of the foreground. In addition, the boundary optimization module enforces spatial consistency among features and strengthens their representational power. Further, the dual-stream encoder improves the capture of both local features and long-range feature dependencies. We evaluate MGBMatting on two widely used image matting datasets, Composition-1k and Distinctions-646. Extensive experiments demonstrate that the proposed MGBMatting achieves strong performance.
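For readers new to the task, the abstract's terms "alpha matte" and "trimap unknown regions" come from the standard matting formulation, in which each pixel of an image I is modeled as a blend of foreground F and background B: I = αF + (1 − α)B. The sketch below illustrates this composition equation and the role of a trimap; it is generic background on matting, not the MGBMatting model itself, and the function names are illustrative.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Standard matting equation: I = alpha * F + (1 - alpha) * B.
    fg, bg: H x W x 3 float arrays; alpha: H x W values in [0, 1]."""
    a = alpha[..., None]  # broadcast alpha across the colour channels
    return a * fg + (1.0 - a) * bg

def trimap_unknown_mask(trimap):
    """A trimap coarsely labels pixels as background (0), unknown (128),
    or foreground (255); a matting network only needs to estimate alpha
    inside the unknown band, which is why methods focus attention there."""
    return trimap == 128
```

With a constant alpha of 0.25, compositing a white foreground over a black background gives a uniform grey image, and the unknown mask simply selects the trimap pixels labelled 128.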
