Abstract

Image and video matting are still challenging problems in areas with low foreground–background contrast. Video matting has the additional challenge of ensuring temporally coherent mattes, because the human visual system is highly sensitive to temporal jitter and flickering. On the other hand, video provides the opportunity to use information from other frames to improve the matte accuracy on a given frame. In this paper, we present a new video matting approach that improves temporal coherence while maintaining high spatial accuracy in the computed mattes. We build sample sets of temporal and local samples that cover the full color distributions of the object and background over all previous frames. This helps guarantee spatial accuracy and temporal coherence by ensuring that proper samples are found even when they are distant in space or time. An explicit energy term encourages temporal consistency in the mattes derived from the selected samples. In addition, we use localized texture features to improve spatial accuracy in low-contrast regions where color distributions overlap. The proposed method achieves better spatial accuracy and temporal coherence than existing video matting methods.
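To make the sampling-based idea concrete, the sketch below shows one common way such methods estimate alpha: a pixel's color is projected onto the line between a candidate foreground/background sample pair, and pairs are scored by a cost combining color reconstruction error with a temporal term that penalizes deviation from the previous frame's alpha. This is a minimal illustration assuming a simple linear compositing model and an exhaustive pair search; the function names, the `lam` weight, and the specific cost form are hypothetical and not taken from the paper.

```python
import numpy as np

def alpha_from_samples(pixel, fg, bg):
    # Project the pixel color onto the FG-BG line to estimate alpha
    # (the standard estimate in sampling-based matting).
    d = fg - bg
    denom = np.dot(d, d)
    if denom < 1e-8:
        return 0.5  # degenerate pair: FG and BG samples coincide
    return float(np.clip(np.dot(pixel - bg, d) / denom, 0.0, 1.0))

def sample_cost(pixel, fg, bg, alpha_prev, lam=0.5):
    # Hypothetical cost: color reconstruction error plus a temporal term
    # penalizing deviation from the previous frame's alpha at this pixel.
    alpha = alpha_from_samples(pixel, fg, bg)
    recon = np.linalg.norm(pixel - (alpha * fg + (1.0 - alpha) * bg))
    temporal = abs(alpha - alpha_prev)
    return recon + lam * temporal, alpha

def select_best_pair(pixel, fg_samples, bg_samples, alpha_prev):
    # Search candidate FG/BG pairs drawn from local and temporal
    # sample sets; keep the alpha of the lowest-cost pair.
    best_cost, best_alpha = np.inf, 0.5
    for fg in fg_samples:
        for bg in bg_samples:
            cost, alpha = sample_cost(pixel, fg, bg, alpha_prev)
            if cost < best_cost:
                best_cost, best_alpha = cost, alpha
    return best_alpha
```

In a full pipeline, the per-pixel alphas selected this way would typically be smoothed by the energy-based optimization the abstract refers to; the exhaustive double loop here is only for clarity.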
