Abstract

Recent years have witnessed rapid progress in employing graph convolutional networks (GCNs) for various video analysis tasks where graph-based data abound. However, exploring transferable knowledge between different graphs, a direction with broad potential applications, has rarely been studied. To address this issue, we propose a Graph Interaction Networks (GINs) model for transferring relation knowledge across two graphs. Different from conventional domain adaptation or knowledge distillation approaches, our GINs focus on a "self-learned" weight matrix, which is a higher-level representation of the input data: each element of the weight matrix represents the pairwise relation between nodes within the graph. Moreover, we guide the networks to transfer knowledge across the weight matrices by designing a task-specific loss function, so that the relation information is well preserved during transfer. We conduct experiments on two different video analysis scenarios: a newly proposed setting for unsupervised skeleton-based action recognition across different datasets, and supervised group activity recognition with multi-modal inputs. Extensive experiments on six widely used datasets show that our GINs achieve very competitive performance compared with state-of-the-art methods.
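The core idea sketched in the abstract, a pairwise-relation weight matrix per graph and a loss that aligns the two matrices, can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration, not the paper's actual formulation: the relation matrix is parameterized here as a row-softmax over dot-product similarities of node features, and the transfer loss as a squared Frobenius distance between the two graphs' relation matrices; the function names are hypothetical.

```python
import numpy as np

def relation_matrix(x):
    """Hypothetical 'self-learned' weight matrix for one graph.

    x: (N, d) array of node features. Each entry of the returned
    (N, N) matrix encodes a pairwise relation between two nodes,
    here via a row-wise softmax over dot-product similarities
    (an assumed parameterization, not the paper's).
    """
    s = x @ x.T                                   # pairwise similarities
    e = np.exp(s - s.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def relation_transfer_loss(x_src, x_tgt):
    """Squared Frobenius distance between the two relation matrices.

    A stand-in for the task-specific loss the paper designs to
    preserve relation information during transfer; assumes both
    graphs have the same number of nodes.
    """
    w_s = relation_matrix(x_src)
    w_t = relation_matrix(x_tgt)
    return float(np.linalg.norm(w_s - w_t) ** 2)
```

Under this sketch, minimizing `relation_transfer_loss` pushes the target graph's pairwise-relation structure toward the source graph's, which is the kind of relation-level (rather than feature-level) alignment the abstract contrasts with conventional domain adaptation and knowledge distillation.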
