Abstract

Deep learning techniques have recently demonstrated their applicability across many fields, including image processing, natural language processing, speech recognition, and many other real-world problems. Human activity recognition (HAR), meanwhile, has become a popular research topic because of its wide range of applications. Researchers have therefore begun combining these two emerging areas, applying deep learning to HAR problems. Recurrent neural networks (RNNs) in deep learning (DL) offer strong potential for recognizing abnormal human behavior and thereby avoiding security issues. The present study proposes a deep network architecture based on one such deep learning technique, the residual bidirectional long short-term memory (LSTM). The new network avoids gradient vanishing in both the temporal and spatial dimensions, with the aim of increasing the recognition rate. To examine the complexity of activity recognition and classification, two LSTM models were used: a basic model and the proposed model. A comparative analysis was then performed to assess the efficiency of the two models in classifying five human activities: abuse, arrest, arson, assault, and fighting. The basic LSTM model achieved a training accuracy of only 18% and a testing accuracy of 21%, with high training and classification loss values, whereas the proposed LSTM model outperformed it, achieving 100% classification accuracy. These observations show that the proposed LSTM model is well suited to recognizing and classifying human activities, even in real-time videos.
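The core idea named in the abstract, a bidirectional LSTM whose output is combined with a residual shortcut from its input, can be sketched in plain NumPy. This is a minimal illustration of the forward pass only, not the paper's exact architecture: all layer sizes, the weight initialization, and the input-projection shortcut are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(x, Wx, Wh, b):
    """Run one unidirectional LSTM over x of shape (T, d_in).

    Wx: (d_in, 4*d_h), Wh: (d_h, 4*d_h), b: (4*d_h,).
    Returns the hidden states, shape (T, d_h).
    """
    T = x.shape[0]
    d_h = Wh.shape[0]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    out = np.zeros((T, d_h))
    for t in range(T):
        z = x[t] @ Wx + h @ Wh + b          # all four gate pre-activations at once
        i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        out[t] = h
    return out

def residual_bilstm_block(x, params):
    """Bidirectional LSTM with a residual shortcut from the (projected) input."""
    fwd = lstm_layer(x, *params["fwd"])                 # left-to-right pass
    bwd = lstm_layer(x[::-1], *params["bwd"])[::-1]     # right-to-left pass
    h = np.concatenate([fwd, bwd], axis=1)              # (T, 2*d_h)
    # Residual connection: the shortcut lets gradients bypass the recurrent
    # path, which is how such blocks mitigate gradient vanishing.
    return h + x @ params["proj"]                        # project input to match width

# Illustrative sizes: 8 timesteps, 6 input features, 5 hidden units per direction.
rng = np.random.default_rng(0)
def init(d_in, d_h):
    return (0.1 * rng.standard_normal((d_in, 4 * d_h)),
            0.1 * rng.standard_normal((d_h, 4 * d_h)),
            np.zeros(4 * d_h))

T, d_in, d_h = 8, 6, 5
params = {"fwd": init(d_in, d_h),
          "bwd": init(d_in, d_h),
          "proj": 0.1 * rng.standard_normal((d_in, 2 * d_h))}
x = rng.standard_normal((T, d_in))
y = residual_bilstm_block(x, params)
print(y.shape)  # (8, 10): one 2*d_h-wide vector per timestep
```

In a full HAR classifier, several such blocks would typically be stacked and followed by a softmax layer over the activity classes; training (the backward pass) is omitted here.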
