Abstract

Video action recognition has remained a challenging task over the years. The difficulty stems not only from the growing amount of information in videos, but also from the need for an efficient method to retain information over the longer time spans that human actions take to perform. This paper proposes a novel framework, named long-term video action recognition (LVAR), to perform generic action classification in continuous videos. The idea of LVAR is to introduce a partial recurrence connection to propagate information within every layer of a spatial-temporal network, such as the well-known C3D. Empirically, we show that this addition allows the C3D network to access long-term information and subsequently improves action recognition performance on videos of different lengths selected from both the UCF101 and miniKinetics datasets. Our approach is further validated by experiments on untrimmed videos from the Thumos14 dataset.
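
To make the idea of a partial recurrence connection concrete, the sketch below shows one way a 3D convolutional layer could carry a hidden state across consecutive clips, so that information propagates beyond a single clip. This is a minimal illustration under our own assumptions: the class name `PartialRecurrentConv3d`, the `recurrent_ratio` channel split, and the mean-over-time state update are hypothetical, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class PartialRecurrentConv3d(nn.Module):
    """A 3D conv layer with a partial recurrence connection (sketch).

    A fraction of the output channels is mixed with a hidden state
    carried over from the previous clip, letting information flow
    across clip boundaries. Details here are assumptions.
    """

    def __init__(self, in_channels, out_channels, recurrent_ratio=0.25):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=3, padding=1)
        # Only this many channels participate in the recurrence
        # ("partial" recurrence).
        self.n_rec = int(out_channels * recurrent_ratio)
        # 1x1x1 conv that maps the previous hidden state into the
        # recurrent portion of the current activation.
        self.rec_proj = nn.Conv3d(self.n_rec, self.n_rec, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x, hidden=None):
        """x: (batch, C_in, T, H, W) clip; hidden: state from the last clip."""
        y = self.conv(x)
        if hidden is not None:
            rec, rest = y[:, :self.n_rec], y[:, self.n_rec:]
            rec = rec + self.rec_proj(hidden)  # inject past information
            y = torch.cat([rec, rest], dim=1)
        y = self.relu(y)
        # Summarise the recurrent channels as the next state; averaging
        # over time keeps the state shape independent of clip length.
        new_hidden = y[:, :self.n_rec].mean(dim=2, keepdim=True)
        return y, new_hidden


# Usage: process consecutive clips of a long video, threading the state.
clips = [torch.randn(2, 3, 16, 56, 56) for _ in range(3)]
layer = PartialRecurrentConv3d(3, 64)
hidden = None
for clip in clips:
    out, hidden = layer(clip, hidden)  # hidden carries long-term context
```

Here `hidden` acts as the layer's memory: each forward pass over a clip reads from it and writes an updated state, so stacking such layers in a C3D-style network would let information flow well beyond a single clip.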
