Abstract

Image matting has attracted growing interest in recent years owing to its wide applications in numerous vision tasks. Most previous image matting methods rely on a trimap as auxiliary input to define the foreground, background, and unknown regions. However, trimaps require tedious manual annotation and are expensive to obtain in practice, which makes it difficult and inflexible to update the user's input or achieve real-time interaction. Although some automatic matting approaches discard trimaps, they apply only to certain scenarios, such as human matting, which limits their versatility. In this work, we employ clicks as the interactive behaviour for image matting, indicating the user-defined foreground, background, and unknown regions, and propose a click-based deep interactive image matting (DIIM) approach. Compared with trimaps, clicks provide sparse information and are much easier and more flexible to supply, especially for novice users. Based on clicks, users can perform interactive operations and gradually correct errors until they are satisfied with the prediction. Moreover, we propose a recurrent alpha feature propagation module and a full-resolution extraction module to enhance alpha matte estimation at the high level and low level, respectively. Experimental results show that the proposed click-based deep interactive image matting approach achieves promising performance on image matting datasets.
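To make the click-based input concrete, the sketch below shows one plausible way sparse clicks could be turned into network input: foreground, background, and unknown clicks are rasterised into Gaussian heatmap channels and concatenated with the RGB image. The function name, channel layout, and Gaussian encoding are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def encode_clicks(image, fg_clicks, bg_clicks, unknown_clicks, sigma=10.0):
    """Hypothetical click encoding (illustrative only): each click group
    becomes a Gaussian heatmap channel appended to the RGB image."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]

    def heatmap(clicks):
        # Peak of 1.0 at each click location, Gaussian falloff elsewhere.
        m = np.zeros((h, w), dtype=np.float32)
        for cy, cx in clicks:
            m = np.maximum(
                m, np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            )
        return m

    click_maps = np.stack(
        [heatmap(fg_clicks), heatmap(bg_clicks), heatmap(unknown_clicks)], axis=-1
    )
    # Resulting network input: H x W x 6 (RGB + three sparse click channels).
    return np.concatenate([image.astype(np.float32), click_maps], axis=-1)
```

Under this assumed encoding, adding or removing a click only changes a few input channels, which is what allows the user to iteratively refine the predicted alpha matte.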
