Abstract

Multi-modal attention mechanisms have been successfully applied to a wide range of multi-modal graph learning tasks. However, existing attention-based multi-modal graph learning (AMGL) architectures rely heavily on manual design, which demands substantial effort and expert experience. Meanwhile, graph neural architecture search (GNAS) has made great progress toward automatically designing graph-based learning architectures. Yet existing GNAS methods are difficult to adopt directly for searching better AMGL architectures: their search spaces focus only on graph neural network components, and their search objectives ignore both the interactive information between modalities and the long-term content dependencies within each modality. To address these issues, we propose an automated attention-based multi-modal graph learning architecture search (AutoAMS) framework, which can automatically design optimal AMGL architectures for different multi-modal tasks. Specifically, we design an effective attention-based multi-modal (AM) search space consisting of four sub-spaces, which jointly support the automatic search of multi-modal attention representations and the other components of the multi-modal graph learning architecture. In addition, a novel search objective that combines an unsupervised multi-modal reconstruction loss with a task-specific loss is introduced to search for and train AMGL architectures. This objective extracts global features from each modality and captures multi-modal interactions across modalities. Experimental results on multi-modal tasks provide strong evidence that AutoAMS can design high-performance AMGL architectures.
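
To make the combined search objective concrete, a minimal sketch of how the two losses might be weighted together is given below; the trade-off coefficient \lambda and the loss symbols are our own notation for illustration and are not taken from the paper:

\mathcal{L}_{\text{search}} = \mathcal{L}_{\text{task}} + \lambda \, \mathcal{L}_{\text{recon}}

where \mathcal{L}_{\text{task}} denotes the supervised task-specific loss, \mathcal{L}_{\text{recon}} denotes the unsupervised multi-modal reconstruction loss aggregated over all modalities, and \lambda balances the two terms during architecture search and training.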
