Abstract

Detecting anomalous activity in video surveillance often suffers from the limited availability of training data. Transfer learning may close this gap by allowing the use of existing annotated data from a source domain. However, analyzing the source feature space in terms of its potential for transferring learning to another context remains an open question. This paper reports a study on video anomaly detection, focusing on the analysis of feature embeddings of pre-trained CNNs using novel cross-domain generalization measures that make it possible to study how source features generalize to different target video domains. This generalization analysis is not only a theoretical contribution; it is also useful in practice as a way to understand which datasets allow better transfer of knowledge. Our results confirm this, yielding better anomaly detectors for video frames and enabling analysis of the positive and negative aspects of transfer learning.
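
As a minimal sketch of the general pipeline the abstract refers to (not the paper's specific method or generalization measures), the snippet below extracts per-frame embeddings from a pre-trained CNN and fits a simple one-class anomaly detector on a source domain before scoring frames from a target domain. The file paths, the ResNet-18 backbone, and the One-Class SVM detector are illustrative assumptions.

```python
# Sketch only: pre-trained CNN embeddings + one-class anomaly scoring
# across a source and a target domain. Paths and model choices are
# hypothetical, not taken from the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import OneClassSVM

# Pre-trained backbone with the classification head removed -> 512-d embeddings.
backbone = models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed_frames(frame_paths):
    """Map a list of video-frame image paths to an (N, 512) embedding matrix."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in frame_paths])
    return backbone(batch).numpy()

# Hypothetical source-domain (normal-only) frames and target-domain frames to score.
source_embeddings = embed_frames(["source/normal_0001.jpg", "source/normal_0002.jpg"])
target_embeddings = embed_frames(["target/frame_0001.jpg", "target/frame_0002.jpg"])

# One-class model of "normal" source behaviour; higher score = more anomalous.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(source_embeddings)
anomaly_scores = -detector.decision_function(target_embeddings)
print(anomaly_scores)
```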
