Abstract

Learning representations from features obtained from multiple modalities has received much attention in Music Information Retrieval. Among the available sources of information, musical data can be described mainly by features extracted from the acoustic content, lyrics, and metadata, which carry complementary information and help discriminate between recordings. In this work, we propose a new method for learning multimodal representations structured as a heterogeneous network, capable of incorporating different musical features while simultaneously exploring their similarity. Our multimodal representation is centered on tag information extracted from a state-of-the-art neural language model and, complementarily, on the audio represented by its mel-spectrogram. We subjected our method to a thorough evaluation comprising 10,000 queries across different scenarios and model parameter variations. In addition, we compute the Mean Average Precision and compare the proposed representation to representations built only from audio or from tags obtained with a pre-trained neural model. The proposed method achieves the best results in all evaluated scenarios, highlighting the discriminative power that multimodality can add to musical representations.
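
To make the two ingredients mentioned above concrete, the sketch below shows a typical way to extract a mel-spectrogram for the audio view and to compute Mean Average Precision over ranked retrieval results. This is a minimal illustration, not the authors' implementation; the library choices (librosa, NumPy) and all parameter values (sample rate, number of mel bands) are assumptions.

```python
# Illustrative sketch: mel-spectrogram feature extraction and MAP evaluation.
# Not the paper's pipeline; library and parameter choices are assumptions.
import numpy as np
import librosa


def audio_to_melspectrogram(path, sr=22050, n_mels=128):
    """Load an audio file and return its log-scaled mel-spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)


def average_precision(ranked_relevance):
    """Average precision for one query, given a binary relevance list
    ordered by the ranking produced from a learned representation."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0


def mean_average_precision(per_query_relevance):
    """Mean Average Precision over a collection of queries."""
    return float(np.mean([average_precision(q) for q in per_query_relevance]))
```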
