Abstract

Capturing the interactions of human articulations lies at the heart of skeleton-based action recognition. Recent graph-based methods are inherently limited by weak spatial context modeling, owing to the fixed interaction pattern and inflexible shared weights of GCNs. To address these problems, we propose the multi-view interactional graph network (MV-IGNet), which can construct, learn, and infer multi-level spatial skeleton context, including view-level (global), group-level, and joint-level (local) contexts, in a unified way. MV-IGNet leverages different skeleton topologies as multiple views to cooperatively generate complementary action features. For each view, separable parametric graph convolution (SPG-Conv) enables multiple parameterized graphs to enrich local interaction patterns, providing strong graph-adaptation ability for handling irregular skeleton topologies. We further partition the skeleton into several groups, and the higher-level group contexts, both inter-group and intra-group, are then hierarchically captured by the SPG-Conv layers. A simple yet effective global context adaption (GCA) module facilitates representative feature extraction by learning input-dependent skeleton topologies. Compared to mainstream works, MV-IGNet can be readily implemented while offering a smaller model size and faster inference. Experimental results show that the proposed MV-IGNet achieves impressive performance on the large-scale benchmarks NTU-RGB+D and NTU-RGB+D 120.
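
To make the core idea of parameterized graphs concrete, below is a minimal PyTorch-style sketch of a graph convolution that augments a fixed skeleton adjacency with several learnable adjacency matrices, in the spirit of SPG-Conv as described in the abstract. All names, shapes, and hyperparameters (e.g. `num_graphs`, 25 joints) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: a graph convolution with several learnable
# ("parametric") adjacency matrices added to a fixed skeleton topology.
# This is NOT the official MV-IGNet code; names and shapes are assumptions.
import torch
import torch.nn as nn


class ParametricGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, num_graphs=3):
        super().__init__()
        # Fixed skeleton adjacency (here a placeholder identity), kept as a buffer.
        self.register_buffer("A_fixed", torch.eye(num_joints))
        # Extra fully learnable graphs that enrich local interaction patterns.
        self.A_learn = nn.Parameter(torch.zeros(num_graphs, num_joints, num_joints))
        # One 1x1 convolution per graph to transform channel features.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=1)
            for _ in range(num_graphs)
        )

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        out = 0
        for k, conv in enumerate(self.convs):
            A = self.A_fixed + self.A_learn[k]            # adapted topology for graph k
            out = out + torch.einsum("nctv,vw->nctw", conv(x), A)
        return out


if __name__ == "__main__":
    layer = ParametricGraphConv(in_channels=3, out_channels=64, num_joints=25)
    x = torch.randn(2, 3, 16, 25)   # 2 clips, xyz coords, 16 frames, 25 joints
    print(layer(x).shape)           # torch.Size([2, 64, 16, 25])
```

In this sketch, each learnable graph contributes its own interaction pattern on top of the fixed skeleton, and the per-graph 1x1 convolutions keep the channel transform separate from the topology, which is one plausible reading of the "separable parametric" design.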
