Abstract

Video foreground extraction has traditionally been based on frame differencing and background subtraction. These methods rely on the temporal consistency of successive video frames: if that consistency is suddenly broken, for example when an object is occluded, frames are lost, or the camera shakes, the quality of the extracted foreground can degrade significantly. In this paper, we extend One-Shot Video Object Segmentation by adding two branches, one for positioning foreground targets and one for propagating foreground semantic information. These two branches run in parallel with the existing foreground segmentation branch, which facilitates efficient iteration and fine-tuning of the segmentation branch. This approach differs from previous background subtraction and frame difference methods: it does not rely on temporal information and, given a manual annotation of only one frame, can extract the specified foreground across the entire video sequence. Coloring the separated target then recovers complete foreground information. Experiments show that the proposed foreground extraction algorithm is more robust to dynamic backgrounds, camera shake, intermittent object motion, low contrast, and similar conditions.
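The abstract's parallel-branch design can be sketched as follows; this is a minimal illustration in PyTorch, not the authors' implementation. The class name `ThreeBranchNet`, the layer widths, and the output shapes of the two added heads are all hypothetical, chosen only to show how a shared backbone can feed a segmentation head plus two parallel heads for positioning and semantic propagation.

```python
import torch
import torch.nn as nn

class ThreeBranchNet(nn.Module):
    """Hypothetical sketch: shared backbone with three parallel heads."""

    def __init__(self, num_classes: int = 1):
        super().__init__()
        # Shared feature extractor (stand-in for the OSVOS backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Existing branch: per-pixel foreground/background segmentation.
        self.segmentation = nn.Conv2d(64, num_classes, 1)
        # Added branch 1 (assumed form): coarse target-positioning heatmap.
        self.positioning = nn.Conv2d(64, 1, 1)
        # Added branch 2 (assumed form): foreground semantic features to
        # propagate from the one annotated frame to the rest of the sequence.
        self.semantics = nn.Conv2d(64, 128, 1)

    def forward(self, x):
        feats = self.backbone(x)
        # The three heads run in parallel on the shared features, so the
        # segmentation branch can be fine-tuned without touching the others.
        return self.segmentation(feats), self.positioning(feats), self.semantics(feats)

# One annotated frame is enough to adapt the segmentation head to a target.
net = ThreeBranchNet()
masks, heatmap, sem = net(torch.randn(1, 3, 224, 224))
```

Because the added heads share only the backbone features, gradients from fine-tuning the segmentation head on the single annotated frame need not disturb the positioning and propagation branches, which is the property the abstract attributes to the parallel layout.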
