Abstract

Unmanned Aerial Vehicles (UAVs) find significant application in video surveillance due to their low cost, high portability, and fast mobility. In this paper, the proposed approach recognizes human activity in aerial video sequences through keypoints detected on the human body via OpenPose. The detected keypoints are passed to machine learning and deep learning classifiers to classify the human actions. Experimental results demonstrate that the multilayer perceptron and SVM outperformed all other classifiers, reporting accuracies of 87.80% and 87.77% respectively, whereas the LSTM-based models did not perform as well: stacked Long Short-Term Memory (LSTM) networks produced an accuracy of 71.30% and Bidirectional LSTM yielded 76.04%. The results also indicate that the machine learning models performed better than the deep learning models. The major reason for this finding is the limited availability of data: deep learning models are data-hungry and require a large amount of data to train well. The paper also analyses the failure cases of OpenPose by testing the system on aerial videos captured by a drone flying at a higher altitude. This work provides a baseline for validating machine learning and deep learning classifiers on recognition of human actions from aerial videos.
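A minimal sketch of the keypoint-classification stage described above, using scikit-learn's MLP and SVM classifiers. The feature layout (OpenPose's BODY_25 model, i.e. 25 keypoints flattened into 50 (x, y) values per frame), the class count, and the synthetic data are all assumptions for illustration; they are not the paper's dataset or exact configuration.

```python
# Hypothetical sketch: classifying flattened pose keypoints with an MLP and an SVM.
# Assumes OpenPose's BODY_25 output (25 keypoints, each an (x, y) pair) giving a
# 50-dimensional feature vector per frame; the data below is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for OpenPose keypoints: 300 frames, 3 action classes.
n_samples, n_keypoints = 300, 25
y = rng.integers(0, 3, size=n_samples)
# Shift each class to a distinct mean pose so the toy problem is learnable.
X = rng.normal(size=(n_samples, n_keypoints * 2)) + y[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
svm = SVC(kernel="rbf")

mlp.fit(X_train, y_train)
svm.fit(X_train, y_train)

print(f"MLP accuracy: {mlp.score(X_test, y_test):.2f}")
print(f"SVM accuracy: {svm.score(X_test, y_test):.2f}")
```

In practice the feature vectors would come from running OpenPose on each frame (with missing keypoints handled, e.g. zero-filled or interpolated) rather than from random data.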
