Abstract

Video anomaly detection and localization remains a challenging task in computer vision. Previous methods treated it as an outlier detection problem, computing the deviation between test samples and learned normal patterns. In this paper, an adaptive intra-frame classification network (AICN) is proposed to reformulate the task as a multi-class classification problem. The contributions of our method are as follows. First, AICN is an end-to-end network for anomaly detection and localization: motion convolutional layers and shape convolutional layers extract spatial-temporal features without resizing or splitting the frames before forward propagation. Second, AICN improves the adaptiveness of the model: the adaptive region pooling layer and the intra-frame classifier allow it to handle frames of different resolutions and to transfer more easily to other scenes. Third, AICN evaluates the abnormality of frames from the intra-frame classification results; this strategy preserves more of the contextual relations among sub-regions and enables the model to outperform previous methods. The proposed method is evaluated on four public datasets with different background complexities and resolutions: the UCSD Ped1, UCSD Ped2, Avenue, and Subway datasets. The results are compared with previous approaches to confirm the effectiveness and advantages of our method.
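To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture as the abstract outlines it: a shape (appearance) branch, a motion branch over a short frame stack, adaptive region pooling to a fixed grid, and a per-region intra-frame classifier. The channel counts, kernel sizes, region grid, frame-stack length, and number of pattern classes are illustrative assumptions, not the paper's actual hyper-parameters.

```python
# Hedged sketch of the AICN pipeline described in the abstract.
# All layer sizes, the region grid, and the number of pattern classes
# are assumptions for illustration only.
import torch
import torch.nn as nn


class AICNSketch(nn.Module):
    def __init__(self, num_classes=10, region_grid=(8, 8)):
        super().__init__()
        # Shape (appearance) branch: operates on a single grayscale frame.
        self.shape_conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Motion branch: operates on a short stack of consecutive frames.
        self.motion_conv = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Adaptive region pooling: produces a fixed region grid regardless
        # of input resolution, so frames need not be resized or split.
        self.region_pool = nn.AdaptiveAvgPool2d(region_grid)
        # Intra-frame classifier: assigns each pooled region to one of the
        # learned normal-pattern classes.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, frame, frame_stack):
        shape_feat = self.shape_conv(frame)          # (B, 64, H, W)
        motion_feat = self.motion_conv(frame_stack)  # (B, 64, H, W)
        feat = torch.cat([shape_feat, motion_feat], dim=1)  # (B, 128, H, W)
        regions = self.region_pool(feat)             # (B, 128, gh, gw)
        b, c, gh, gw = regions.shape
        regions = regions.permute(0, 2, 3, 1).reshape(b, gh * gw, c)
        return self.classifier(regions)              # (B, gh*gw, num_classes)


# Usage: per-region class scores; a region whose maximum class confidence is
# low can be flagged as anomalous, giving localization within the frame.
model = AICNSketch()
frame = torch.randn(1, 1, 240, 360)   # single grayscale frame
stack = torch.randn(1, 5, 240, 360)   # 5-frame temporal stack
scores = model(frame, stack)          # shape: (1, 64, 10)
```

Because the pooling layer adapts its output grid to any input size, the same network can be run on frames of different resolutions, which is the adaptiveness property the abstract emphasizes.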
