Abstract
Video anomaly detection is of great significance due to its wide applications in video surveillance. Recently, there has been a trend of using video prediction frameworks to tackle this problem: such methods detect anomalies according to the difference between a predicted frame and its ground truth. However, existing prediction methods do not consider multi-scale temporal constraints when generating future frames. Therefore, building on the prediction framework, this paper proposes a temporally enhanced anomaly detection approach that designs a generative adversarial network with dual discriminators (a frame discriminator and a sequence discriminator) to predict future frames. To obtain more realistic predictions, beyond the commonly used spatial constraints and the adversarial penalty from the frame discriminator, we also impose constraints from both short-range and long-range motion. Specifically, for short-range motion modeling, we use an optical flow loss to ensure temporal continuity across two adjacent frames, while for long-range motion modeling, we design a sequence discriminator that distinguishes sequences containing predicted frames from real sequences, making predicted frames more consistent with their preceding consecutive frames. Experiments on three datasets, UCSD Ped1, UCSD Ped2, and Avenue, demonstrate the effectiveness of our method in terms of various evaluation criteria for video anomaly detection.
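To make the described objective concrete, below is a minimal PyTorch sketch of how the generator's combined loss could be assembled. It is an illustration under stated assumptions, not the paper's exact formulation: the module names (`G`, `D_frame`, `D_seq`, `flow_net`), the choice of L1 for the intensity and flow terms, and the loss weights are all hypothetical placeholders; the abstract only specifies that spatial constraints, a frame-adversarial penalty, an optical flow loss, and a sequence-adversarial penalty are combined.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules (not specified by the abstract):
#   G:        predicts the next frame from T previous frames, input (B, T, C, H, W)
#   D_frame:  scores a single frame as real/fake, output probability in [0, 1]
#   D_seq:    scores a whole frame sequence as real/fake, output probability in [0, 1]
#   flow_net: any differentiable optical-flow estimator (e.g., a pretrained flow network)

def generator_loss(G, D_frame, D_seq, flow_net, frames, next_frame,
                   w_int=1.0, w_adv=0.05, w_flow=2.0, w_seq=0.05):
    """Sketch of the combined objective: spatial (intensity) loss +
    frame-adversarial loss + short-range optical-flow loss +
    long-range sequence-adversarial loss. Weights are illustrative."""
    pred = G(frames)  # predicted future frame, (B, C, H, W)

    # Spatial constraint: pixel-wise difference to the ground-truth frame.
    l_int = F.l1_loss(pred, next_frame)

    # Frame-level adversarial term: the prediction should fool D_frame.
    score_f = D_frame(pred)
    l_adv = F.binary_cross_entropy(score_f, torch.ones_like(score_f))

    # Short-range motion: flow between the last input frame and the prediction
    # should match the flow between the last input frame and the true frame.
    last = frames[:, -1]
    l_flow = F.l1_loss(flow_net(last, pred), flow_net(last, next_frame))

    # Long-range motion: the sequence ending in the predicted frame should be
    # indistinguishable from an all-real sequence to the sequence discriminator.
    fake_seq = torch.cat([frames, pred.unsqueeze(1)], dim=1)
    score_s = D_seq(fake_seq)
    l_seq = F.binary_cross_entropy(score_s, torch.ones_like(score_s))

    return w_int * l_int + w_adv * l_adv + w_flow * l_flow + w_seq * l_seq
```

At test time, consistent with the prediction framework the abstract describes, the anomaly score for a frame would be derived from the discrepancy between the predicted frame and its ground truth (e.g., a PSNR-based score), with larger prediction errors indicating likely anomalies.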