A typical shortcoming of STM (Space-Time Memory Network) based video object segmentation algorithms is that their high segmentation performance comes at the cost of slow processing speeds, which makes it difficult to meet real-world application demands. In this work, we propose an online knowledge distillation method to develop a lightweight STM-based video segmentation algorithm that achieves fast segmentation while maintaining performance. Specifically, we utilize a novel adaptive learning rate to tackle the issue of inverse learning during distillation. Subsequently, we introduce a Smooth Block mechanism to reduce the impact of structural disparities between the teacher and student models on distillation outcomes. Moreover, to reduce the difficulty of fitting the student model to single-frame features, we design a Space-Time Feature Fusion (STFF) module that provides appearance and position priors for the feature fitting process of each frame. Finally, we employ a simple Discriminator module for adversarial training with the student model, encouraging the student to learn the feature distribution of the teacher. Extensive experiments show that our algorithm attains performance comparable to the current state of the art on both the DAVIS and YouTube datasets, while running up to 4× faster with 20× fewer parameters and 30× fewer GFLOPs.
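To make the adversarial distillation component concrete, below is a minimal PyTorch-style sketch of one training step in which a discriminator classifies teacher features as "real" and student features as "fake", while the student is trained both to match the teacher's features and to fool the discriminator. All names (`Discriminator`, `distill_step`), the network architecture, and the loss weights are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of adversarial feature distillation; shapes, modules,
# and weights are assumed for illustration and do not reflect the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Small per-location discriminator scoring whether a feature map
    comes from the teacher (real) or the student (fake)."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels // 2, 1, 1),  # one real/fake logit per location
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)

def distill_step(feat_t, feat_s, disc, opt_s, opt_d,
                 lam_fit=1.0, lam_adv=0.1):
    """One online-distillation step on matching (B, C, H, W) feature maps.

    feat_t: teacher features (treated as fixed targets).
    feat_s: student features (still attached to the student's graph).
    """
    bce = nn.BCEWithLogitsLoss()

    # --- discriminator update: teacher features "real", student "fake".
    opt_d.zero_grad()
    d_real = disc(feat_t.detach())
    d_fake = disc(feat_s.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # --- student update: fit the teacher's features directly, and
    # adversarially push the student's feature distribution toward the
    # teacher's by trying to make the discriminator output "real".
    opt_s.zero_grad()
    loss_fit = F.mse_loss(feat_s, feat_t.detach())
    loss_adv = bce(disc(feat_s), torch.ones_like(d_fake))
    loss_s = lam_fit * loss_fit + lam_adv * loss_adv
    loss_s.backward()
    opt_s.step()
    return loss_s.item(), loss_d.item()
```

In this sketch the direct fitting loss anchors the student to the teacher's features frame by frame, while the adversarial term only asks the student's features to be indistinguishable from the teacher's in distribution, which is a weaker, more forgiving constraint when the two architectures differ.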