Abstract
Monitoring human activities is challenging in contexts such as security surveillance, healthcare, and human-computer interaction. Human Activity Recognition (HAR) is the task of predicting what a person is doing from the traces of their movement. We propose deep recurrent neural networks (DRNNs) for building recognition models capable of capturing long-range dependencies in variable-length input sequences, and we present unidirectional, bidirectional, and cascaded DRNN architectures based on long short-term memory (LSTM), evaluating their effectiveness on several benchmark datasets. LSTM is a deep recurrent architecture widely used in deep learning, especially for time-series prediction; it can process single data points (such as images) as well as entire data sequences (such as speech or video), and it is well suited to classifying, processing, and making predictions on time-series data because there can be lags of unknown duration between important events in a series. We apply HAR with LSTM to a smartphone sensor dataset. Compared to a classical approach, a DRNN with LSTM cells requires little or no feature engineering: raw sensor data can be fed directly into the network, which acts as a black box that learns to model the problem and, in our experiments, identifies the movement type correctly in almost all cases. All experiments were run in Jupyter Notebook with Python 3.7+.
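To make the pipeline concrete, the sketch below shows how raw, windowed smartphone signals can be fed directly into a stacked (cascaded) LSTM classifier with no hand-crafted features, in the spirit of the approach described above. It is a minimal illustration, not the authors' exact implementation: the window length, channel count, layer sizes, and six activity classes are assumptions modeled on the common UCI smartphone HAR setup, and a bidirectional variant would simply wrap each LSTM layer in `layers.Bidirectional`.

```python
# Minimal sketch of an LSTM-based HAR classifier (assumed hyperparameters).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS = 128   # samples per sliding window (assumption)
CHANNELS = 9      # e.g. 3-axis body acc., total acc., gyro (assumption)
NUM_CLASSES = 6   # walking, upstairs, downstairs, sitting, standing, laying

def build_lstm_har_model():
    # Raw sensor windows go straight into the recurrent layers:
    # no manual feature engineering.
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, CHANNELS)),
        layers.LSTM(64, return_sequences=True),  # first LSTM layer keeps the sequence
        layers.LSTM(64),                         # second LSTM layer summarizes it
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic stand-in data; replace with real windowed smartphone signals.
    x = np.random.randn(32, TIMESTEPS, CHANNELS).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=32)
    model = build_lstm_har_model()
    model.fit(x, y, epochs=1, batch_size=16, verbose=0)
    print(model.predict(x[:1]).argmax(axis=-1))  # predicted activity index
```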