ZENODO
Dataset . 2023
License: CC BY
Data sources: ZENODO

EmokineDataset

Authors: Christensen, Julia F.; Fernández, Andrés; Smith, Rebecca A.; Michalareas, Georgios; Yazdi, Sina H. N.; Farahi, Fahima; Schmidt, Eva-Madeleine; Bahmanian, Nasimeh; Roig, Gemma
Abstract

Companion resources:

- Paper: Christensen, Julia F.; Fernandez, Andres; Smith, Rebecca; Michalareas, Georgios; Yazdi, Sina H. N.; Farahi, Fahima; Schmidt, Eva-Madeleine; Bahmanian, Nasimeh; Roig, Gemma (2024): "EMOKINE: A Software Package and Computational Framework for Scaling Up the Creation of Highly Controlled Emotional Full-Body Movement Datasets".
- Code: https://github.com/andres-fr/emokine

EmokineDataset is a pilot dataset showcasing the usefulness of the emokine software library. It features a single dancer performing 63 short sequences, which have been recorded and analyzed in different ways. This pilot dataset is organized in 3 folders:

- Stimuli: The sequences come in 4 visual presentations that can be used as stimuli in observer experiments:
  - Silhouette: Videos showing a white silhouette of the dancer on a black background.
  - FLD (Full-Light Display): Video recordings with the performer's face blurred out.
  - PLD (Point-Light Display): Videos featuring a black background with white circles corresponding to the selected body landmarks.
  - Avatar: Videos produced by the proprietary XSENS motion-capture software, featuring a robot-like avatar performing the captured movements on a light blue background.
- Data: To facilitate computation and analysis of the stimuli, this pilot dataset also includes several data formats:
  - MVNX: Raw motion-capture data recorded directly from the XSENS motion-capture system.
  - CSV: Translation of a subset of the MVNX sequences into CSV, included for easier integration with mainstream analysis software tools. The subset includes the following features: acceleration, angularAcceleration, angularVelocity, centerOfMass, footContacts, orientation, position and velocity.
  - CamPos: While the MVNX provides 3D positions with respect to a global frame of reference, the CamPos [JSON](https://www.json.org/json-en.html) files give positions from the perspective of the camera used to render the PLD videos. Specifically, the 3D positions are given with respect to the camera as (x, y, z), where (x, y) runs from (0, 0) (left, bottom) to (1, 1) (right, top), and z is the distance between the camera and the point in meters. This can be used to obtain a 2-dimensional projection of the dancer's position (simply ignore z).
  - Kinematic: Analysis of a selection of relevant kinematic features, computed from the MVNX, Silhouette and CamPos data and provided in tabular form.
- Validation: Data and experiments reported in our paper as part of the data validation, supporting its meaningfulness and usefulness for downstream tasks:
  - TechVal: A collection of plots presenting relevant statistics of the pilot dataset.
  - ObserverExperiment: Results, in tabular form, of an online study in which human participants were asked to recognize the emotions of the stimuli and to rate their beauty.

More specifically, the 63 unique sequences are divided into 9 unique choreographies, each one performed once as an explanation, and then 6 times with different intended emotions (angry, content, fearful, joy, neutral and sad).

Once downloaded, the pilot dataset should have the following structure:

    EmokineDataset
    ├── Stimuli
    │   ├── Avatar
    │   ├── FLD
    │   ├── PLD
    │   └── Silhouette
    ├── Data
    │   ├── CamPos
    │   ├── CSV
    │   ├── Kinematic
    │   ├── MVNX
    │   └── TechVal
    └── Validation
        ├── TechVal
        └── ObserverExperiment

Each of the stimulus, MVNX, CamPos and Kinematic directories has this structure:

    ├── explanation
    │   ├── _seq1_explanation.
    │   ├── ...
    │   └── _seq9_explanation.
    ├── _seq1_angry.
    ├── _seq1_content.
    ├── _seq1_fearful.
    ├── _seq1_joy.
    ├── _seq1_neutral.
    ├── _seq1_sad.
    ...
    └── _seq9_sad.
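As described above, a CamPos entry gives each landmark as (x, y, z), with (x, y) normalized to [0, 1] from the bottom-left corner and z the camera distance in meters, and dropping z yields a 2D projection. The following is a minimal sketch of that projection, not code from the emokine package; the landmark names and values are invented for illustration, and the y-flip assumes a target image convention with the pixel origin at the top-left.

```python
# Hypothetical CamPos-style points: (x, y, z) per landmark, with (x, y)
# normalized in [0, 1] ((0, 0) = left, bottom) and z = distance in meters.
points_3d = {
    "head": (0.52, 0.81, 3.4),
    "left_hand": (0.39, 0.55, 3.1),
    "right_foot": (0.61, 0.07, 3.6),
}

def project_2d(points):
    """Drop the depth coordinate z, keeping the normalized (x, y) pair."""
    return {name: (x, y) for name, (x, y, _z) in points.items()}

def to_pixels(points_2d, width, height):
    """Convert normalized (x, y) to pixel coordinates.

    CamPos y grows upward, but image rows conventionally grow downward,
    so y is flipped here (an assumption about the target convention).
    """
    return {name: (x * width, (1.0 - y) * height)
            for name, (x, y) in points_2d.items()}

pixels = to_pixels(project_2d(points_3d), width=1920, height=1080)
```

For an orthographic overlay on the PLD videos this is all that is needed; z remains available if depth ordering or scaling of the rendered markers is desired.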
The CSV directory is slightly different: instead of a single file per sequence and emotion, it features a folder containing one .csv file for each of the 8 extracted features (acceleration, velocity, ...).
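As a sanity check on the layout above, the 63 sequence labels follow directly from 9 choreographies times 7 takes each (one explanation plus six emotions). A quick sketch of the enumeration (labels only; actual filenames carry prefixes and extensions not shown in the listing above):

```python
# Enumerate the 63 sequence labels: 9 choreographies, each with one
# "explanation" take plus six emotional takes.
EMOTIONS = ["angry", "content", "fearful", "joy", "neutral", "sad"]

labels = []
for seq in range(1, 10):                      # choreographies seq1..seq9
    labels.append(f"seq{seq}_explanation")    # explanation take
    for emotion in EMOTIONS:
        labels.append(f"seq{seq}_{emotion}")  # emotional takes

print(len(labels))  # 63
```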

Funded by the Max Planck Society, Germany. Under review.

Keywords

Emotion, Dance, Open Science, Motion Capture, Computer Vision, Aesthetics, Affective Neuroscience, Dataset
