Abstract

The typical shortcoming of STM (Space-Time Memory Network)-based video object segmentation algorithms is that their high segmentation performance comes at the cost of slow processing speeds, which makes it difficult to meet real-world application demands. In this work, we propose an online knowledge distillation method to develop a lightweight video segmentation algorithm based on the STM mode, achieving fast segmentation while maintaining performance. Specifically, we utilize a novel adaptive learning rate to tackle the issue of inverse learning during distillation. We then introduce a Smooth Block mechanism to reduce the impact of structural disparities between the teacher and student models on distillation outcomes. Moreover, to reduce the difficulty of fitting the student model to single-frame features, we design a Space-Time Feature Fusion (STFF) module that provides appearance and position priors for the feature fitting process of each frame. Finally, we employ a simple Discriminator module for adversarial training with the student model, encouraging the student model to learn the feature distribution of the teacher model. Extensive experiments show that our algorithm attains performance comparable to the current state-of-the-art on both the DAVIS and YouTube datasets, despite running up to ×4 faster, with ×20 fewer parameters and ×30 fewer GFLOPs.
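To make the adaptive-learning-rate idea concrete, the sketch below shows one plausible feature-distillation step in which the update rate is damped whenever the student's fitting loss rises relative to the previous step (one reading of the "inverse learning" issue the abstract mentions). All function names, the MSE loss choice, and the decay rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def distillation_step(student_feat, teacher_feat, lr, prev_loss, decay=0.5):
    """Hedged sketch of one feature-distillation update (not the paper's code).

    If the current fitting loss exceeds the previous one -- a possible sign
    of inverse learning -- the learning rate is scaled down by `decay`;
    otherwise it is left unchanged.
    """
    # MSE between student and teacher features (the distillation target).
    loss = float(np.mean((student_feat - teacher_feat) ** 2))
    adapted_lr = lr * decay if loss > prev_loss else lr
    # Gradient of the MSE loss with respect to the student features.
    grad = 2.0 * (student_feat - teacher_feat) / student_feat.size
    updated = student_feat - adapted_lr * grad
    return updated, loss, adapted_lr
```

In an actual STM-style pipeline the update would act on network weights through backpropagation rather than directly on features; this toy version only illustrates the loss-driven rate adaptation.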

