Abstract

In the era of remote sensing (RS) big data, recommending RS images that meet users' individual needs is an urgent technology for reducing the time cost of image acquisition. However, existing techniques have two main problems: (1) they rely on users' queries, and thus lack initiative and cannot tap users' potential interests; and (2) they restrict user preferences to temporal and/or spatial information, ignoring other attributes and remaining incompatible with visual information. To fully exploit the features of RS images and thereby achieve accurate active recommendation, in this paper we propose a new Multi-modal Knowledge graph-aware Deep Graph Attention Network (MMKDGAT), built upon graph convolutional networks. Specifically, we first construct a multi-modal knowledge graph (MMKG) for RS images to integrate their various attributes as well as visual information, and then conduct deep relational attention-based information aggregation to enrich the node representations with multi-modal information and higher-order collaborative signals. Extensive experiments on two simulated RS image recommendation datasets demonstrate that MMKDGAT achieves noticeable improvements over several state-of-the-art methods in both active recommendation accuracy and cold-start recommendation.
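The relational attention-based aggregation step described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: it assumes a single aggregation layer where each neighbor embedding is translated by its relation embedding, attention scores come from a plain dot product with the target node (a simplified stand-in for the paper's learned attention function), and the aggregator is an unweighted sum with the node's own embedding. All function names here are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

def relational_attention_aggregate(h_self, neighbors):
    """One toy layer of relation-aware attention aggregation.

    h_self    : embedding of the target node (e.g., an RS image).
    neighbors : list of (h_neighbor, h_relation) embedding pairs, where
                h_relation embeds the edge type (attribute, visual link,
                interaction, ...) in the multi-modal knowledge graph.

    Each neighbor is translated by its relation embedding, scored
    against the target node, and the softmax-normalized scores weight
    the neighborhood sum.
    """
    scores = [dot(h_self, vadd(h_n, h_r)) for h_n, h_r in neighbors]
    alphas = softmax(scores)
    agg = [0.0] * len(h_self)
    for a, (h_n, _) in zip(alphas, neighbors):
        agg = [g + a * x for g, x in zip(agg, h_n)]
    # Combine self and aggregated neighborhood (sum aggregator);
    # stacking such layers propagates higher-order collaborative signals.
    return vadd(h_self, agg)

# Toy usage: two neighbors with identity relations; the neighbor more
# aligned with the target node receives the larger attention weight.
h = relational_attention_aggregate(
    [1.0, 0.0],
    [([1.0, 0.0], [0.0, 0.0]), ([0.0, 1.0], [0.0, 0.0])],
)
```

Stacking several such layers is what gives the "deep" aggregation its higher-order reach: a two-layer stack lets an image node absorb signals from neighbors-of-neighbors, which is what enables recommendations for cold-start items connected only through shared attributes.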
