Abstract

Inference that uses skeletons to steer RGB videos is applicable to fine-grained activities in indoor human action recognition (IHAR). However, existing methods explore only spatial alignment and are prone to bias, resulting in limited performance. The authors propose a Three-stage Guidance (3sG) framework that leverages skeleton knowledge to promote RGB features in three stages. First, a soft shading image is proposed to alleviate background noise in videos, allowing the network to focus directly on the motion region. Second, the authors propose extracting RGB frames of interest to reduce computational effort; furthermore, to exploit the complementary information between skeletons and RGB more fully, the skeleton is coupled to the frame representation in a distinct spatial-temporal sharing pattern. Third, the global skeleton features and the skeleton-guided RGB features are fed into shared classifiers, which approximate the logit distributions of the two to enhance RGB unimodal performance. Finally, a fusion strategy is proposed that uses two learnable parameters to adaptively integrate the skeleton with the RGB. 3sG outperforms state-of-the-art results on the Toyota Smarthome dataset and is more efficient than comparable methods on the NTU RGB+D dataset.
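The abstract does not specify the exact form of the final fusion, only that two learnable parameters adaptively integrate the skeleton and RGB streams. A minimal sketch, assuming each parameter is a scalar weight applied to one modality's logits (the class names and shapes below are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Hypothetical sketch of the fusion step: one learnable scalar per modality.

    The paper only states that "two learnable parameters" adaptively
    integrate skeleton and RGB; a per-modality scalar weight on the
    logits is an assumption made here for illustration.
    """

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))  # weight on skeleton logits
        self.beta = nn.Parameter(torch.ones(1))   # weight on RGB logits

    def forward(self, skeleton_logits: torch.Tensor, rgb_logits: torch.Tensor) -> torch.Tensor:
        # Weighted sum of the two modalities' class logits; alpha and beta
        # are optimized jointly with the rest of the network.
        return self.alpha * skeleton_logits + self.beta * rgb_logits

fusion = AdaptiveFusion()
skel = torch.randn(4, 31)  # batch of 4; 31 is an illustrative class count
rgb = torch.randn(4, 31)
fused = fusion(skel, rgb)
print(fused.shape)
```

Because the weights are learned rather than fixed, the network can shift emphasis toward whichever modality is more reliable for the dataset at hand.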
