Abstract

Human activity anomaly detection plays a crucial role in the next generation of surveillance and assisted living systems. Most anomaly detection algorithms are generative models that learn features from raw images. This work shows that popular state-of-the-art autoencoder-based anomaly detection systems cannot effectively detect anomalies related to human posture and object position. Therefore, a human-pose-driven and object-detector-based deep learning architecture is proposed, which simultaneously leverages human poses and raw RGB data to perform human activity anomaly detection. It is demonstrated that pose-driven learning overcomes the limitations of its raw-RGB-based counterpart in classifying different human activities. Extensive validation is provided using popular datasets. It is then demonstrated that, with the aid of object detection, human activity classification can be effectively used for human activity anomaly detection. Moreover, novel challenging datasets, namely BMbD, M-BMbD and JBMOPbD, are proposed for single- and multi-target human posture anomaly detection and joint human posture and object position anomaly detection evaluations.
