Abstract
In recent years, abnormal event detection in video surveillance has become an important task, mainly addressed by deep learning methods that must take many challenges into account. However, these methods are still not trained with an anomaly-detection objective, which limits their effectiveness on this problem. In this paper, we propose an unsupervised method based on a new architecture of deep one-class convolutional auto-encoders (CAEs) that learns a compact spatio-temporal feature representation for anomaly detection. Our CAEs are constructed by adding deconvolution layers to the VGG-16 CNN. We then train the CAEs with a one-class objective, fine-tuning our model to properly exploit the richness of the dataset on which the CNN was trained. The first CAE is trained on the original frames to extract a good descriptor of shapes, and the second CAE is trained on optical flow representations to provide a strong description of motion between frames. For this purpose, we define two loss functions, a compactness loss and a representativeness loss, to train our CAE architectures not only to maximize the inter-class distance and minimize the intra-class distance but also to ensure the tightness and representativeness of the features of normal images. We reduce the feature dimensionality by applying PCA (Principal Component Analysis) and combine our two descriptors with a Gaussian classifier to detect abnormal spatio-temporal events. Our method achieves high performance in terms of reliability and accuracy, detecting abnormal events efficiently on challenging datasets compared with state-of-the-art methods.

Keywords: Deep Learning, Anomaly detection, Convolutional Auto-Encoder
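The final scoring stage described above (PCA-reduced features scored against a Gaussian model of normal data) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the synthetic stand-in features, and the use of Mahalanobis distance as the anomaly score are all assumptions for the sake of the example.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA via eigendecomposition of the covariance of normal features."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]  # top components first
    return mean, eigvecs[:, order]

def fit_gaussian(Z):
    """Fit a multivariate Gaussian to the PCA-reduced normal features."""
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularized
    return mu, np.linalg.inv(cov)

def anomaly_score(z, mu, cov_inv):
    """Squared Mahalanobis distance: larger means less like normal data."""
    d = z - mu
    return float(d @ cov_inv @ d)

# Synthetic stand-ins for CAE feature vectors of normal frames (assumption).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 10))
mean, components = fit_pca(normal, n_components=3)
Z = (normal - mean) @ components
mu, cov_inv = fit_gaussian(Z)

# A frame resembling the training data vs. a clearly shifted (abnormal) one.
z_normal = (rng.normal(0.0, 1.0, size=10) - mean) @ components
z_abnormal = (rng.normal(8.0, 1.0, size=10) - mean) @ components
print("normal score:", anomaly_score(z_normal, mu, cov_inv))
print("abnormal score:", anomaly_score(z_abnormal, mu, cov_inv))
```

In practice the score would be thresholded (e.g. on a validation set) to declare an event abnormal; the abstract's method additionally combines two such descriptors, one from appearance and one from optical flow, before scoring.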