Abstract

In this work, we address the problem of video object segmentation (VOS), namely segmenting specific objects throughout a video sequence given only an annotated first frame. Previous VOS methods based on deep neural networks often solve this problem by fine-tuning the segmentation model on the first frame of the test video sequence, which is time-consuming and cannot adapt well to the target video. In this paper, we propose adaptive online learning for video object segmentation (AOL-VOS), which adaptively optimizes both the network parameters and the hyperparameters of the segmentation model to better predict segmentation results. Specifically, we first pre-train the segmentation model on static video frames and then learn an effective adaptation strategy on the training set by optimizing both network parameters and hyperparameters. During testing, we adapt the learned segmentation model online to the specific test video sequence and its subsequent frames, where confidence patterns are employed to constrain and guide the adaptive learning process by fusing both object appearance and motion cues. Comprehensive evaluations on the DAVIS 2016 and SegTrack v2 datasets demonstrate the significant superiority of the proposed AOL-VOS over other state-of-the-art methods for video object segmentation.
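The abstract describes the test-time procedure only at a high level; the following is a minimal PyTorch sketch of one plausible reading of it, in which the model is first fine-tuned on the annotated first frame and then updated on each incoming frame using confidence-masked pseudo-labels. Everything here (the `online_adapt` function, its arguments, and the thresholding rule) is a hypothetical illustration, not the authors' released code; in particular, the paper's confidence patterns also fuse motion cues, which this sketch omits for brevity.

```python
import torch
import torch.nn as nn

def online_adapt(model, frames, first_mask, lr=1e-5,
                 steps_first=50, steps_online=2, conf_thresh=0.9):
    """Sketch of test-time online adaptation for a single video.

    model:      segmentation network mapping a (1, 3, H, W) frame to
                (1, 1, H, W) foreground logits (hypothetical interface).
    frames:     list of (1, 3, H, W) tensors, the video sequence.
    first_mask: (1, 1, H, W) binary ground-truth mask for frames[0].
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # 1) Supervised adaptation on the annotated first frame.
    for _ in range(steps_first):
        loss = bce(model(frames[0]), first_mask).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    masks = [first_mask]
    for frame in frames[1:]:
        # 2) Predict, then keep only confidently fg/bg pixels as pseudo-labels.
        with torch.no_grad():
            prob = torch.sigmoid(model(frame))
        conf = ((prob > conf_thresh) | (prob < 1 - conf_thresh)).float()
        pseudo = (prob > 0.5).float()

        # 3) A few confidence-masked update steps on the new frame.
        for _ in range(steps_online):
            loss = (bce(model(frame), pseudo) * conf).sum() / conf.sum().clamp(min=1)
            opt.zero_grad(); loss.backward(); opt.step()

        with torch.no_grad():
            masks.append((torch.sigmoid(model(frame)) > 0.5).float())
    return masks
```

In this reading, the confidence mask plays the role of the abstract's confidence patterns: only pixels the model is already sure about drive the online updates, which limits drift from noisy self-supervision on unlabeled frames.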
