Abstract

We present a framework for attention-based video object detection using a simple yet effective external memory management algorithm. Attention mechanisms have been adopted in the video object detection task to enrich the features of key frames using adjacent frames. Although several recent studies utilized frame-level first-in-first-out (FIFO) memory to collect global video information, such a memory structure suffers from collection inefficiency, which results in low attention performance and high computational cost. To address this issue, we developed a novel scheme called diversity-aware feature aggregation (DAFA). Whereas other methods cannot store sufficient feature information without expanding memory capacity, DAFA efficiently collects diverse features while avoiding redundancy using a simple Euclidean distance-based metric. Experimental results on the ImageNet VID dataset demonstrate that our lightweight model with global attention achieves 83.5 mAP on the ResNet-101 backbone, which exceeds the accuracy of most existing methods with minimal runtime. Our method with global and local attention stages obtains 84.5 and 85.9 mAP on ResNet-101 and ResNeXt-101, respectively, thus achieving state-of-the-art performance without requiring additional post-processing methods.
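The core idea of diversity-aware aggregation can be illustrated with a minimal sketch. The class below is an assumed, simplified design (not the authors' exact DAFA algorithm): a fixed-capacity feature memory that, once full, admits a new feature only if swapping it for the most redundant stored feature (the one closest to its nearest neighbor in Euclidean distance) would increase the memory's diversity.

```python
import numpy as np

class DiversityAwareMemory:
    """Hypothetical sketch of a diversity-aware feature memory.

    Keeps at most `capacity` feature vectors. When full, a candidate
    feature replaces the most redundant stored feature (smallest
    nearest-neighbor Euclidean distance), but only if the candidate is
    farther from the remaining features than that feature was.
    Assumes capacity >= 2.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.features = []  # list of 1-D numpy arrays

    def _min_dist(self, feat, pool):
        # Euclidean distance from `feat` to its nearest neighbor in `pool`
        return min(float(np.linalg.norm(feat - f)) for f in pool)

    def add(self, feat):
        if len(self.features) < self.capacity:
            self.features.append(feat)
            return True
        # Nearest-neighbor distance of each stored feature (its redundancy score)
        nn = [self._min_dist(f, [g for j, g in enumerate(self.features) if j != i])
              for i, f in enumerate(self.features)]
        redundant = int(np.argmin(nn))
        # Admit the candidate only if it is more diverse than the slot it replaces
        rest = [g for j, g in enumerate(self.features) if j != redundant]
        if self._min_dist(feat, rest) > nn[redundant]:
            self.features[redundant] = feat
            return True
        return False
```

In this sketch, two near-duplicate frame features cannot both survive once a more distinctive feature arrives, which is the redundancy-avoidance behavior the abstract attributes to DAFA; the real method operates on high-dimensional frame-level features rather than toy vectors.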
