# EmokineDataset

Companion resources for the paper:

> Christensen, Julia F.; Fernandez, Andres; Smith, Rebecca; Michalareas, Georgios; Yazdi, Sina H. N.; Farahi, Fahima; Schmidt, Eva-Madeleine; Bahmanian, Nasimeh; Roig, Gemma (2024): "EMOKINE: A Software Package and Computational Framework for Scaling Up the Creation of Highly Controlled Emotional Full-Body Movement Datasets".

Code: https://github.com/andres-fr/emokine

EmokineDataset is a pilot dataset showcasing the usefulness of the emokine software library. It features a single dancer performing 63 short sequences, which have been recorded and analyzed in different ways. This pilot dataset is organized in 3 folders:

* **Stimuli**: The sequences are presented in 4 visual presentations that can be used as stimuli in observer experiments:
  * **Silhouette**: Videos with a white silhouette of the dancer on a black background.
  * **FLD (Full-Light Display)**: Video recordings with the performer's face blurred out.
  * **PLD (Point-Light Display)**: Videos featuring a black background with white circles corresponding to the selected body landmarks.
  * **Avatar**: Videos produced by the proprietary XSENS motion-capture software, featuring a robot-like avatar performing the captured movements on a light blue background.
* **Data**: To facilitate computation and analysis of the stimuli, this pilot dataset also includes several data formats:
  * **MVNX**: Raw motion-capture data recorded directly from the XSENS motion-capture system.
  * **CSV**: Translation of a subset of the MVNX sequences into CSV, included for easier integration with mainstream analysis software tools. The subset includes the following features: acceleration, angularAcceleration, angularVelocity, centerOfMass, footContacts, orientation, position and velocity.
  * **CamPos**: While the MVNX provides 3D positions with respect to a global frame of reference, the CamPos [JSON](https://www.json.org/json-en.html) files represent the position from the perspective of the camera used to render the PLD videos. Specifically, each 3D position is given with respect to the camera as (x, y, z), where (x, y) ranges from (0, 0) (left, bottom) to (1, 1) (right, top), and z is the distance in meters between the camera and the point. This can be useful to obtain a 2-dimensional projection of the dancer's position (simply ignore z).
  * **Kinematic**: Analysis of a selection of relevant kinematic features, computed from the MVNX, Silhouette and CamPos information, provided in tabular form.
* **Validation**: Data and experiments reported in our paper as part of the data validation, to support its meaningfulness and usefulness for downstream tasks:
  * **TechVal**: A collection of plots presenting relevant statistics of the pilot dataset.
  * **ObserverExperiment**: Results, in tabular form, of an online study in which human participants were asked to recognize the emotions of the stimuli and rate their beauty.

More specifically, the 63 unique sequences are divided into 9 unique choreographies, each performed once as an explanation and then 6 times with different intended emotions (angry, content, fearful, joy, neutral and sad). Once downloaded, the pilot dataset should have the following structure:

```
EmokineDataset
├── Stimuli
│   ├── Avatar
│   ├── FLD
│   ├── PLD
│   └── Silhouette
├── Data
│   ├── CamPos
│   ├── CSV
│   ├── Kinematic
│   ├── MVNX
│   └── TechVal
└── Validation
    ├── TechVal
    └── ObserverExperiment
```

Each of the Stimuli, MVNX, CamPos and Kinematic folders has this structure:

```
├── explanation
│   ├── _seq1_explanation.
│   ├── ...
│   └── _seq9_explanation.
├── _seq1_angry.
├── _seq1_content.
├── _seq1_fearful.
├── _seq1_joy.
├── _seq1_neutral.
├── _seq1_sad.
├── ...
└── _seq9_sad.
```
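The 2-dimensional projection of the CamPos coordinates described above can be sketched in Python. The exact JSON layout is an assumption made for illustration (a mapping from frame index to per-landmark `[x, y, z]` coordinates); consult the actual files for the real schema:

```python
import json


def campos_to_2d(path):
    """Load a CamPos JSON file and project each landmark to 2D by dropping z.

    Assumed (illustrative) layout: {frame: {landmark_name: [x, y, z], ...}, ...}
    with (x, y) in [0, 1] screen coordinates and z the camera distance in meters.
    """
    with open(path) as f:
        campos = json.load(f)
    projected = {}
    for frame, landmarks in campos.items():
        # Keep only (x, y); z (camera distance) is simply ignored.
        projected[frame] = {name: (xyz[0], xyz[1])
                            for name, xyz in landmarks.items()}
    return projected
```

Because (x, y) already live in normalized screen coordinates, the result can be plotted directly to approximate the landmark layout seen in the PLD videos.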
The CSV directory is slightly different: instead of a single file per sequence and emotion, it features a folder containing one .csv file for each of the 8 extracted features (acceleration, velocity, ...).
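A per-sequence CSV folder like the one described above can be gathered into a single dictionary. The filename pattern (`<feature>.csv`) and the header-plus-numeric-rows layout are illustrative assumptions, not the documented format:

```python
import csv
from pathlib import Path

# The 8 features listed in the dataset description.
FEATURES = ["acceleration", "angularAcceleration", "angularVelocity",
            "centerOfMass", "footContacts", "orientation",
            "position", "velocity"]


def load_sequence_csvs(seq_dir):
    """Load the per-feature CSV files of one sequence folder into a dict.

    Assumes (for illustration) one '<feature>.csv' file per feature, each with
    a header row followed by numeric rows. Missing features are skipped.
    """
    seq_dir = Path(seq_dir)
    tables = {}
    for feature in FEATURES:
        path = seq_dir / f"{feature}.csv"
        if not path.exists():
            continue
        with path.open(newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            rows = [[float(value) for value in row] for row in reader]
        tables[feature] = (header, rows)
    return tables
```

Keeping the header alongside the rows makes it straightforward to convert any feature into a labeled table later, e.g. with pandas.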
Funded by the Max Planck Society, Germany. Under review.
Emotion, Dance, Open Science, Motion Capture, Computer Vision, Aesthetics, Affective Neuroscience, Dataset