Note: This dataset is a work in progress and will be updated continuously as the full study is completed.

Introduction: Music emotion research is typically conducted with participants who share similar characteristics, or it relies on retrospective, summative judgments of the emotions perceived in music. Emotion annotations collected in real time from larger groups of participants (N > 10) listening to audio material remain scarce. This dataset addresses that gap with a substantial number of participants and a diverse selection of piano performance excerpts.

Dataset Details:
Participants: 128 participants with diverse demographics, including first language, gender, and level of musical instrument-playing experience.
Audio Material: 51 unique one-minute excerpts of international award-winning piano performances from the Western canon, spanning several musical eras, annotated for perceived emotion throughout each excerpt's duration.
Annotation Platform: A web-based platform developed for this study enables time-varying emotion annotation and survey completion, and is accessible from any standard web browser.
Annotation Method: The platform uses the Valence-Arousal (VA) model and provides guide emotion tags to support emotion rating throughout each audio excerpt.
Emotion Ratings: In total, 133,477 VA ratings were collected from the 128 participants across the 51 clips. On average, each participant provided 20.5 VA ratings per one-minute clip (SD = 29.3).

File Summaries:
Raw_ratingspoints_nodup_128p_51samples.csv: All VA rating points collected for the 51 audio samples annotated by the 128 participants.
Cleaned_ratingspoints_nodup_128p_51samples.csv: Updated VA rating points for the 51 music samples from the 128 participants, retaining only the most recent ratings.
subscale_score_sum.csv: Background survey scores (18 distinct measures) for each of the 128 participants.
51samples_url_meta.csv: Metadata for the 51 music clips selected from the MAESTRO dataset for this study.
clean-up-function-220608.ipynb: A Jupyter notebook showing how to retain only the most recent ratings when participants rewind and re-rate a clip or rate the same audio position more than once, with a full explanation of the updating process.
test-nodup.csv: A mock sample of VA ratings used to test the clean-up function.
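The "keep only the most recent rating" step performed by the clean-up notebook can be sketched with pandas. This is a minimal illustration, not the notebook's actual code: the column names (participant, sample, audio_position_s, logged_at, valence, arousal) are assumptions and may differ from those in the released CSV files.

```python
import pandas as pd

# Toy VA ratings: participant 1 rates position 10.0 s twice (re-rating).
# All column names here are hypothetical stand-ins for the real CSV schema.
raw = pd.DataFrame({
    "participant":      [1, 1, 1, 2],
    "sample":           ["clip_a", "clip_a", "clip_a", "clip_a"],
    "audio_position_s": [10.0, 10.0, 12.0, 10.0],
    "logged_at":        [100, 105, 106, 100],  # wall-clock order of rating events
    "valence":          [0.2, 0.4, 0.5, -0.1],
    "arousal":          [0.1, 0.3, 0.6, 0.0],
})

def keep_most_recent(df: pd.DataFrame) -> pd.DataFrame:
    """For each (participant, sample, audio position), keep only the most
    recently logged VA rating, so re-rated positions overwrite earlier ones."""
    return (
        df.sort_values("logged_at")
          .drop_duplicates(
              subset=["participant", "sample", "audio_position_s"],
              keep="last",
          )
          .sort_index(ignore_index=True)
    )

cleaned = keep_most_recent(raw)
print(len(cleaned))  # 3 — the earlier rating at position 10.0 s is dropped
```

The same idea scales to the full raw CSV: read it with `pd.read_csv`, apply the de-duplication, and write the cleaned table back out.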
Keywords: music emotion