Abstract

Video anomaly detection is a critical component of intelligent video surveillance systems and is extensively deployed and studied in both industry and academia. However, existing methods generalize so strongly that they also predict anomalous samples accurately, and they cannot exploit the high-level semantic and temporal contextual information in videos, resulting in unstable prediction performance. To alleviate these issues, we propose an encoder–decoder model named SMAMS, built on a spatiotemporal masked autoencoder and memory modules. First, we represent video events as spatiotemporal cubes and mask a subset of the patches. Then, the unmasked patches are fed into the spatiotemporal masked autoencoder to extract high-level semantic and spatiotemporal features of the video events. Next, we add multiple memory modules that store unmasked video patches at different feature layers. Finally, skip connections are introduced to compensate for the loss of crucial information caused by the memory modules. Experimental results show that the proposed method outperforms state-of-the-art methods, achieving AUC scores of 99.9%, 94.8%, and 78.9% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets, respectively.
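
The abstract gives no implementation details, so the following is a minimal sketch of the described pipeline, assuming a PyTorch transformer backbone. The module sizes, the cosine-similarity memory addressing (a MemAE-style scheme), the single memory level, and the names SMAMSSketch and MemoryModule are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryModule(nn.Module):
    """Learnable memory of prototypical normal-feature items; inputs are
    re-expressed as soft combinations of those items (assumed addressing)."""

    def __init__(self, num_items: int, dim: int):
        super().__init__()
        self.items = nn.Parameter(torch.randn(num_items, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, tokens, dim); cosine-similarity attention over memory.
        attn = F.softmax(
            F.normalize(z, dim=-1) @ F.normalize(self.items, dim=-1).T, dim=-1
        )
        return attn @ self.items  # features rebuilt from stored prototypes


class SMAMSSketch(nn.Module):
    """Masked autoencoder over spatiotemporal cube tokens, with one memory
    module and a skip connection (simplified to a single feature level)."""

    def __init__(self, dim: int = 512, num_mem: int = 100):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        self.memory = MemoryModule(num_mem, dim)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        self.head = nn.Linear(dim, dim)  # predicts patch content

    def forward(self, tokens: torch.Tensor, mask_ratio: float = 0.75):
        # tokens: (B, N, D) flattened spatiotemporal cubes of a video event.
        B, N, D = tokens.shape
        keep = max(1, int(N * (1.0 - mask_ratio)))
        # Randomly keep a subset of patches; the rest are masked out.
        idx = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :keep]
        visible = tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
        z = self.encoder(visible)   # high-level spatiotemporal features
        z_mem = self.memory(z)      # read-out restricted to normal patterns
        z = z_mem + z               # skip connection restores lost detail
        return self.head(self.decoder(z)), idx


# At test time, a large reconstruction error on the visible patches would
# flag an anomalous event under this assumed design.
model = SMAMSSketch()
recon, kept = model(torch.randn(2, 16, 512))
print(recon.shape)  # torch.Size([2, 4, 512])
```

The skip connection mirrors the abstract's motivation: the memory read-out keeps reconstructions close to stored normal patterns, while the additive path preserves the fine detail the memory would otherwise discard.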
