Abstract

Anomaly detection constitutes a fundamental step in developing self-aware autonomous agents capable of continuously learning from new situations, as it enables the agent to distinguish novel experiences from already encountered ones. This paper combines Dynamic Bayesian Networks (DBNs) and Neural Networks (NNs) and proposes a method for detecting anomalies in video data at different abstraction levels. We use a Variational Autoencoder (VAE) to reduce the dimensionality of video frames and of the optical flow between consecutive images, generating a latent space that captures both visual and dynamical information and that is comparable to low-dimensional sensory data (e.g., positioning, steering angle). An Adapted Markov Jump Particle Filter is employed to predict the following frames and to detect anomalies in the video data. We evaluate our method on different video sequences recorded by a semi-autonomous vehicle performing different tasks in a closed environment, and additionally report tests on benchmark anomaly detection datasets.
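To make the prediction-based anomaly idea concrete, the toy sketch below runs a bootstrap particle filter over a sequence of latent vectors (stand-ins for VAE encodings of frame + optical-flow pairs) and scores each frame by how surprising its latent is under the filter's one-step prediction. This is an illustrative simplification, not the paper's Adapted Markov Jump Particle Filter: the transition model here is a plain Gaussian random walk, and the latent trajectory is synthetic; all function and parameter names are our own.

```python
import numpy as np

def particle_filter_anomaly_scores(latents, n_particles=500,
                                   process_std=0.1, obs_std=0.1, seed=0):
    """Score each latent vector by its negative log-likelihood under a
    bootstrap particle filter with a Gaussian random-walk transition.
    Higher score = more surprising frame (potential anomaly)."""
    rng = np.random.default_rng(seed)
    # Initialize all particles at the first observed latent.
    particles = np.tile(latents[0], (n_particles, 1))
    scores = [0.0]  # first frame has no prediction to compare against
    for z in latents[1:]:
        # Predict: propagate particles with process noise (random walk).
        particles = particles + rng.normal(0.0, process_std, particles.shape)
        # Weight: Gaussian likelihood of the observed latent per particle.
        sq = np.sum((particles - z) ** 2, axis=1)
        logw = -sq / (2.0 * obs_std ** 2)
        m = logw.max()
        w = np.exp(logw - m)  # stabilized weights
        # Anomaly score: negative log of the mean likelihood (up to a constant).
        scores.append(float(-(m + np.log(w.mean()))))
        # Resample particles in proportion to their weights.
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(scores)

# Synthetic example: a smooth latent trajectory with an abrupt shift at t=30.
rng = np.random.default_rng(1)
latents = np.cumsum(rng.normal(0.0, 0.01, (50, 4)), axis=0)
latents[30:] += 2.0  # injected anomaly: sudden jump in latent space
scores = particle_filter_anomaly_scores(latents)
```

In this setup the injected jump yields a sharply higher score at the frame where it occurs, which is the behavior one would threshold to flag anomalies; the paper's DBN-based filter additionally handles discrete regime switches, which this random-walk sketch omits.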
