Abstract

To help users find desired music videos and create attractive music videos, many methods have been proposed for applications such as music video recommendation, captioning and generation. In this paper, a novel method is proposed that realizes these applications simultaneously on the basis of heterogeneous network analysis via latent link estimation. To the best of our knowledge, this work is the first attempt to realize music video recommendation, captioning and generation simultaneously. The proposed method estimates latent links by jointly considering the multimodal information and the multiple types of social metadata obtained from music videos via Laplacian multiset canonical correlation analysis. This makes it feasible to construct a heterogeneous network in which the audio, visual and textual information of music videos and user information can be compared directly in the same feature space. Furthermore, link prediction on the obtained heterogeneous network associates (i) user information with the audio information users desire; (ii) audio information with textual information describing the contents of musical pieces; and (iii) audio information with visual information representing those contents visually. As a result, support for (i) music video recommendation, (ii) captioning and (iii) generation, respectively, becomes feasible. Experimental results on a real-world dataset constructed from YouTube-8M show the effectiveness of the proposed method.

