Abstract

In this chapter, the authors focus on vision-based (RGB) action and activity recognition methods in which long short-term memory (LSTM) networks are deployed for sequence learning. They explore sequence learning concepts integrated with LSTM for action and activity recognition, covering LSTM and its variants such as multi-layer (deep) LSTM and bidirectional LSTM networks. They explain the working of traditional recurrent neural networks (RNNs) and their limitations, and why LSTM performs better than the basic RNN. RNNs are a class of artificial neural networks that have become increasingly popular in recent years because of their ability to learn from time-series data; unlike traditional feedforward neural networks, an RNN contains recurrent connections. The vanishing gradient problem that affects such networks can be addressed by a special type of RNN known as the LSTM.
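To make the setting concrete, the sketch below shows how a deep, bidirectional LSTM of the kind described above can be applied to per-frame RGB features for clip-level action classification. It is a minimal illustration, not code from the chapter; the feature dimension, hidden size, layer count, and number of classes are illustrative assumptions.

```python
# Minimal sketch (not the chapter's implementation): a deep bidirectional LSTM
# classifier over per-frame RGB features for action recognition.
import torch
import torch.nn as nn

class BiLSTMActionClassifier(nn.Module):
    def __init__(self, feature_dim=2048, hidden_dim=512, num_layers=2, num_classes=101):
        super().__init__()
        # Stacked ("deep") LSTM layers; bidirectional=True reads the frame
        # sequence both forwards and backwards, as in bidirectional LSTM variants.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, frame_features):
        # frame_features: (batch, num_frames, feature_dim), e.g. CNN features
        # extracted from each RGB frame of a video clip (assumed preprocessing).
        outputs, _ = self.lstm(frame_features)
        # Use the last time step's representation for the clip-level prediction.
        return self.classifier(outputs[:, -1, :])

# Usage: classify a batch of 4 clips, each represented by 16 frame features.
model = BiLSTMActionClassifier()
clips = torch.randn(4, 16, 2048)
logits = model(clips)  # shape: (4, 101)
```

The gating mechanism inside each LSTM cell is what mitigates the vanishing gradient problem of plain RNNs, allowing the network to retain information across long frame sequences.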
