
The MOSA dataset is a large-scale music dataset containing 742 professional piano and violin solo performances by 23 musicians (more than 30 hours and more than 570 K notes). The dataset features the following types of data:

- High-quality 3-D motion capture data
- Audio recordings
- Manual semantic annotations

This is the dataset of the paper: Huang et al. (2024) MOSA: Music Motion with Semantic Annotation Dataset for Multimedia Analysis and Generation. IEEE/ACM Transactions on Audio, Speech and Language Processing. DOI: 10.1109/TASLP.2024.3407529. Preprint: https://arxiv.org/abs/2406.06375

A full description of the dataset is available on GitHub: https://github.com/yufenhuang/MOSA-Music-mOtion-and-Semantic-Annotation-dataset/blob/main/MOSA-dataset/dataset.md

To request access to the full dataset, please sign in to Zenodo and submit the request form.
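As a minimal sketch of how per-performance motion capture data of this kind could be handled: the file names and column schema below are hypothetical (the actual layout is documented in the dataset.md linked above); we assume a CSV with one row per frame and `<marker>_x/_y/_z` columns for each 3-D marker.

```python
from io import StringIO
import pandas as pd

# Hypothetical schema, NOT the confirmed MOSA file format:
# one row per mocap frame, x/y/z columns per 3-D marker.
csv_text = """frame,wrist_x,wrist_y,wrist_z,elbow_x,elbow_y,elbow_z
0,0.1,0.2,0.3,0.4,0.5,0.6
1,0.2,0.3,0.4,0.5,0.6,0.7
"""

def load_mocap(buf):
    """Parse a mocap CSV into an (n_frames, n_markers, 3) float array."""
    df = pd.read_csv(buf)
    # Keep only coordinate columns, preserving their original order.
    coords = df.filter(regex=r"_(x|y|z)$").to_numpy(dtype=float)
    return coords.reshape(len(df), -1, 3)

positions = load_mocap(StringIO(csv_text))
print(positions.shape)  # (2, 2, 3): 2 frames, 2 markers, xyz
```

Reshaping into a frames-by-markers-by-coordinates array makes per-marker trajectories easy to slice, e.g. `positions[:, 0, :]` for the first marker over time.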
3D motion capture, multimedia, annotation, semantic, music
