Abstract

Abnormal event detection in videos plays an essential role in public security. However, most weakly supervised learning methods ignore the relationship between complicated spatial correlations and the dynamic trends of temporal patterns in video data. In this paper, we provide a new perspective: spatial similarity and temporal consistency are adopted to construct Spatio-Temporal Graph-based CNNs (STGCNs). For feature extraction, we use Inflated 3D (I3D) convolutional networks, which better capture appearance and motion dynamics in videos. For the spatial graph and the temporal graph, each video segment is regarded as a vertex, and an attention mechanism is introduced to allocate attention to each segment. For the spatio-temporal fusion graph, we propose a self-adaptive weighting scheme to fuse the two graphs. Finally, we combine a ranking loss and a classification loss to improve the robustness of STGCNs. We evaluate STGCNs on the UCF-Crime dataset (128 h in total) and the ShanghaiTech dataset (317,398 frames in total), achieving AUC scores of 84.2% and 92.3%, respectively. Experimental results under other evaluation metrics further confirm the effectiveness and robustness of the method.
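
To fix ideas, the following is a minimal PyTorch sketch of the pipeline the abstract describes: per-segment I3D features feed a spatial-similarity graph branch and a temporal-consistency graph branch, the two branches are fused with a self-adaptive weight, and attention allocates importance across segments before anomaly scoring. All module names, dimensions, graph constructions, and the simple graph-convolution layer are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of an STGCN-style model under assumed design choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphBranch(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W) (assumed layer form)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats, adj):
        # Symmetrically normalize the adjacency before propagation.
        deg = adj.sum(-1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(-2)
        return F.relu(adj_norm @ self.proj(feats))

class STGCNSketch(nn.Module):
    def __init__(self, dim=1024):  # 1024 matches common I3D feature size (assumption)
        super().__init__()
        self.spatial = GraphBranch(dim)
        self.temporal = GraphBranch(dim)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # self-adaptive fusion weight
        self.attn = nn.Linear(dim, 1)                  # per-segment attention
        self.score = nn.Linear(dim, 1)                 # anomaly score head

    def forward(self, feats):
        # feats: (T, dim) I3D features, one row per video segment (graph vertex).
        T = feats.size(0)
        # Spatial graph: pairwise feature similarity between segments.
        sim = F.softmax(feats @ feats.t() / feats.size(1) ** 0.5, dim=-1)
        # Temporal graph: consistency between adjacent segments (band matrix).
        idx = torch.arange(T)
        tmp = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs().le(1).float()
        # Self-adaptive fusion of the two branch outputs.
        a = torch.sigmoid(self.alpha)
        h = a * self.spatial(feats, sim) + (1 - a) * self.temporal(feats, tmp)
        # Attention allocates importance across segments before scoring.
        w = torch.softmax(self.attn(h), dim=0)
        return torch.sigmoid(self.score((w * h).sum(0)))

model = STGCNSketch()
print(model(torch.randn(32, 1024)))  # 32 segments of I3D features
```

In a weakly supervised setup such a score would typically be trained with the combined objective the abstract mentions: a ranking loss separating the top-scoring segments of abnormal videos from those of normal videos, plus a video-level classification loss.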
