
handle: 10230/47323
Modeling the various aspects that make a music piece unique is a challenging task, requiring the combination of multiple sources of information. Deep learning is commonly used to obtain representations from such sources, including the audio, interactions between users and songs, or associated genre metadata. Recently, contrastive learning has led to representations that generalize better than those from traditional supervised methods. In this paper, we present a novel approach that combines multiple types of information related to music using cross-modal contrastive learning, allowing us to learn audio representations from heterogeneous data simultaneously. We align the latent representations obtained from playlist-track interactions, genre metadata, and the tracks' audio by maximizing the agreement between these modality representations using a contrastive loss. We evaluate our approach on three tasks: genre classification, playlist continuation, and automatic tagging. We compare its performance with that of a baseline audio-based CNN trained to predict these modalities. We also study the importance of including multiple sources of information when training our embedding model. The results suggest that the proposed method outperforms the baseline in all three downstream tasks and achieves performance comparable to the state of the art.
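The abstract describes aligning per-track embeddings from different modalities by maximizing their agreement with a contrastive loss. The following is a minimal sketch of that idea, not the authors' implementation: it assumes each track in a batch already has an audio embedding plus embeddings derived from playlist-track interactions and genre metadata, and applies a symmetric InfoNCE-style loss between matching rows; all names and the specific loss form are illustrative assumptions.

```python
# Hedged sketch of cross-modal contrastive alignment (illustrative, not the paper's code).
import torch
import torch.nn.functional as F


def contrastive_alignment(anchor: torch.Tensor, other: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of modality embeddings.

    anchor, other: (batch, dim) embeddings for the same tracks, one row per track.
    Matching rows are positives; all other rows in the batch act as negatives.
    """
    a = F.normalize(anchor, dim=-1)
    b = F.normalize(other, dim=-1)
    logits = a @ b.t() / temperature                      # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)    # diagonal entries are the positives
    # Cross-entropy in both directions (anchor -> other and other -> anchor).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, dim = 8, 128
    audio_emb = torch.randn(batch, dim)       # e.g. from an audio CNN encoder
    playlist_emb = torch.randn(batch, dim)    # e.g. from playlist-track interactions
    genre_emb = torch.randn(batch, dim)       # e.g. from genre metadata

    # Align the audio embedding with both auxiliary modalities simultaneously.
    loss = (contrastive_alignment(audio_emb, playlist_emb) +
            contrastive_alignment(audio_emb, genre_emb))
    print(loss.item())
```

In this sketch the audio encoder would receive gradients from both modality pairings at once, which mirrors the paper's stated goal of learning a single audio representation from heterogeneous data.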
Accepted for publication in IEEE Signal Processing Letters
FOS: Computer and information sciences, Sound (cs.SD), Music information retrieval, Computer Science - Sound, Computer Science - Information Retrieval, Audio and Speech Processing (eess.AS), Mood, Machine learning, Recommender systems, FOS: Electrical engineering, electronic engineering, information engineering, Training, Multiple signal classification, Metadata, Acoustic signal processing, Multimedia (cs.MM), Task analysis, Music, Computer Science - Multimedia, Information Retrieval (cs.IR), Electrical Engineering and Systems Science - Audio and Speech Processing
| Indicator | Description | Value |
| --- | --- | --- |
| Citations | An alternative to the "Influence" indicator, also reflecting the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 12 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Top 10% |
| Views | | 2 |
| Downloads | | 1 |