Abstract

As one of the vital topics in intelligent surveillance, weakly supervised online video anomaly detection (WS-OVAD) aims to identify ongoing anomalous events moment by moment in streaming videos, while being trained with only video-level annotations. Previous studies typically adopted a unified single-stage framework, which struggled to simultaneously satisfy the online constraint and the weakly supervised setting. To resolve this dilemma, in this paper we propose a two-stage framework, named “decouple and resolve” (DAR), which consists of two modules: a temporal proposal producer (TPP) and an online anomaly localizer (OAL). Under the supervision of video-level binary labels, the TPP module aims to fully exploit hierarchical temporal relations among snippets to generate precise snippet-level pseudo-labels. Then, given the fine-grained supervisory signals produced by TPP, the Transformer-based OAL module is trained to aggregate both useful cues retrieved from historical observations and anticipated future semantics, in order to make predictions at the current time step. The TPP and OAL modules are jointly trained to share beneficial knowledge in a multi-task learning paradigm. Extensive experimental results on three public datasets validate the superior performance of the proposed DAR framework over competing methods.
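To make the two-stage decoupling concrete, here is a deliberately simplified sketch of the idea, not the paper's actual method: a TPP-like step converts per-snippet scores into snippet-level pseudo-labels under a video-level label, and an OAL-like step scores each time step using only a causal window of past features (a crude stand-in for the paper's hierarchical temporal modeling and Transformer-based aggregation). All function names, the threshold, and the window size are illustrative assumptions.

```python
import numpy as np

def tpp_pseudo_labels(snippet_scores, video_label, thresh=0.5):
    """Simplified stand-in for the TPP stage (not the paper's method):
    derive binary snippet-level pseudo-labels from per-snippet anomaly
    scores, constrained by the video-level binary label."""
    snippet_scores = np.asarray(snippet_scores, dtype=float)
    if video_label == 0:
        # A normal video contains no anomalous snippets.
        return np.zeros(len(snippet_scores), dtype=int)
    labels = (snippet_scores >= thresh).astype(int)
    # An anomalous video must contain at least one positive snippet.
    if labels.sum() == 0:
        labels[int(np.argmax(snippet_scores))] = 1
    return labels

def oal_online_scores(features, weights, window=4):
    """Simplified stand-in for the OAL stage: at each time step t, score
    the current snippet using only the causal window of past features
    (mean pooling here, instead of Transformer-based aggregation)."""
    features = np.asarray(features, dtype=float)
    weights = np.asarray(weights, dtype=float)
    scores = []
    for t in range(len(features)):
        ctx = features[max(0, t - window + 1):t + 1]  # past-only context
        logit = float(ctx.mean(axis=0) @ weights)
        scores.append(1.0 / (1.0 + np.exp(-logit)))   # sigmoid
    return scores
```

In this toy setup, the pseudo-labels from the first function would serve as the fine-grained supervisory signal for training the second, mirroring how TPP's snippet-level pseudo-labels supervise the online OAL module.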
