Abstract

Single-image matting, the task of estimating accurate foreground opacity from a given image, is a severely ill-posed and challenging problem. Inspired by recent advances in image co-segmentation, in this paper we present a novel framework for a new task called co-matting, which aims to simultaneously extract alpha mattes from multiple images that contain slightly deformed instances of the same foreground object against different backgrounds. Our system first generates trimaps for the input images using co-segmentation, and an initial alpha matte for each image using single-image matting. Each alpha matte is then locally evaluated using a novel matting confidence metric learned from a training dataset. In the co-matting step, we first align the foreground object instances using appearance and geometric features, then apply a global optimization over all input images to jointly improve their alpha mattes, allowing high-confidence local regions to guide the corresponding low-confidence regions in other images so that all mattes become more accurate together. Experimental results show that this co-matting framework achieves noticeably higher-quality results on an image stack than applying state-of-the-art single-image matting techniques individually to each image.
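The paper's actual global optimization is not reproduced in this abstract; as an illustrative toy stand-in for the co-matting step, the sketch below fuses a stack of already-aligned alpha mattes by per-pixel confidence-weighted averaging, so that high-confidence regions in one image pull the corresponding low-confidence regions of the others toward a shared consensus. The function name, the weighting scheme, and the blending rule are all assumptions for illustration, not the method described in the paper.

```python
import numpy as np

def fuse_mattes(mattes, confidences, eps=1e-8):
    """Toy stand-in for joint matte refinement (NOT the paper's
    optimization): fuse aligned alpha mattes by per-pixel
    confidence-weighted averaging, then blend each matte toward
    the fused consensus in proportion to its own uncertainty.

    mattes, confidences: lists of HxW float arrays in [0, 1].
    """
    A = np.stack(mattes)       # (N, H, W) alpha estimates
    C = np.stack(confidences)  # (N, H, W) per-pixel confidences
    # Confidence-weighted consensus across the image stack.
    fused = (C * A).sum(axis=0) / (C.sum(axis=0) + eps)
    # Confident pixels are kept as-is; uncertain pixels move
    # toward the consensus estimate.
    return [c * a + (1.0 - c) * fused
            for a, c in zip(mattes, confidences)]
```

For example, if one matte is fully confident and another is fully uncertain, the uncertain matte is replaced by the confident one's values wherever they disagree, which mirrors (in a much simplified form) how high-confidence regions guide low-confidence ones across the stack.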

