Abstract

Video object detection is challenging because object appearance often deteriorates across video frames. To enhance the feature representation of deteriorated frames, previous methods typically aggregate features from a fixed-density, fixed-length window of adjacent frames. However, because videos are highly redundant and objects move irregularly over time, this inflexible strategy cannot exploit temporal information efficiently. We instead present a temporal-adaptive sparse feature aggregation framework, an accurate and efficient method for video object detection. In place of fixed-density, fixed-length window fusion, we propose a temporal-adaptive sparse sampling strategy that uses a stride predictor to select informative frames more efficiently. A collaborative feature aggregation framework, consisting of a pixel-adaptive aggregation module and an object-relational aggregation module, is proposed for feature enhancement. The pixel-adaptive aggregation module enhances pixel-level features of the current frame using corresponding pixel-level features from other frames. Similarly, the object-relational aggregation module further enhances feature representation at the proposal level: a graph is constructed to model the relations between proposals, so that relation features and proposal features are adaptively fused. Experiments demonstrate that our proposed framework significantly surpasses traditional dense aggregation methods, and comprehensive ablation studies verify the effectiveness of each proposed module.
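The abstract gives no implementation details, but the sampling-plus-aggregation idea can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the names StridePredictor, sparse_support_indices, and pixel_adaptive_aggregate are hypothetical, the stride head is a simple global-pooling regressor, and the cosine-similarity softmax weighting is an assumed stand-in for the paper's pixel-adaptive aggregation module. The graph-based object-relational aggregation at the proposal level is not covered here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StridePredictor(nn.Module):
    # Hypothetical stride predictor: maps a frame's feature map to a
    # temporal sampling stride in [1, max_stride]. A high predicted
    # motion score yields a small stride (dense sampling); a low score
    # yields a large stride (sparse sampling of a redundant clip).
    def __init__(self, channels: int, max_stride: int = 8):
        super().__init__()
        self.max_stride = max_stride
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # (N, C, 1, 1) global context
            nn.Flatten(),             # (N, C)
            nn.Linear(channels, 1),
            nn.Sigmoid(),             # motion score in (0, 1)
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        score = self.head(feat)                               # (N, 1)
        stride = 1.0 + (1.0 - score) * (self.max_stride - 1)
        return stride.round().long().clamp(1, self.max_stride)

def sparse_support_indices(t: int, num_frames: int, stride: int,
                           num_support: int = 4) -> torch.Tensor:
    # Symmetric support-frame indices around frame t, spaced by the
    # predicted stride and clamped to the clip boundaries.
    offsets = torch.arange(-(num_support // 2), num_support // 2 + 1) * stride
    idx = (torch.tensor(t) + offsets).clamp(0, num_frames - 1)
    return idx[idx != t]  # drop the current frame (and boundary duplicates of it)

def pixel_adaptive_aggregate(cur: torch.Tensor,
                             support: torch.Tensor) -> torch.Tensor:
    # cur: (C, H, W); support: (K, C, H, W). Each support frame is
    # weighted per pixel by cosine similarity to the current frame,
    # then the support features are softmax-averaged over K frames.
    sim = F.cosine_similarity(support, cur.unsqueeze(0), dim=1)  # (K, H, W)
    weights = sim.softmax(dim=0).unsqueeze(1)                    # (K, 1, H, W)
    return (weights * support).sum(dim=0)                        # (C, H, W)
```

A toy usage, with randomly generated features standing in for backbone outputs:

```python
feats = torch.randn(16, 256, 14, 14)   # toy clip: 16 frames of feature maps
stride = int(StridePredictor(256)(feats[7:8]).item())
idx = sparse_support_indices(t=7, num_frames=16, stride=stride)
enhanced = pixel_adaptive_aggregate(feats[7], feats[idx])
```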
