ZENODO
Dataset . 2025
License: CC BY
Data sources: ZENODO

Event-based Auditory Attention Decoding Dataset Aarhus University

Authors: Nguyen, Nhan Duc Thanh; Mikkelsen, Kaare; Kidmose, Preben

Abstract

This dataset contains EEG (scalp + in-ear) recordings from 24 normal-hearing, native Danish-speaking subjects, each of whom participated sequentially in four paradigms. All recordings were conducted in an acoustically shielded listening room with a reverberation time of 0.4 s, using 32 scalp electrodes and left and right ear-EEG earpieces with six electrodes each. The four paradigms are as follows:

Paradigm 1 - word category oddball comprises 16 trials. In each trial, the subject was presented with a sequence of two classes of spoken words (animal names and cardinal numbers, or color names and cardinal numbers) from a single loudspeaker. The subject was asked to attend to the target events, which were the animal names or color names, and passively count them.

Paradigm 2 - word category with competing speakers comprises 20 trials and uses similar sequences of discrete spoken words as Paradigm 1. In this paradigm, however, two competing streams were presented simultaneously from two loudspeakers placed 60 degrees to the left and right. The subject was asked to attend only to the target events in one of the streams and count them while disregarding the other stream.

Paradigm 3 - competing speech streams with targets comprises 20 trials and is similar to Paradigm 2. In each trial, the subject was presented with two competing streams of different continuous stories. In each stream, one class of words (animal names, human names, color names, or plant species) was predefined as the target words. The subject was asked to attend to one of the two streams (left or right) and focus on that stream's target words. At the end of each trial, the subject answered a question about the target words and received feedback.

Paradigm 4 - competing speech streams without targets was designed to simulate a real-world scenario of selective listening in a setting with multiple sound sources. Two competing streams with two different stories were presented from two loudspeakers, and in each trial the subject was instructed to attend to one stream while disregarding the other. Following each trial, the subject was probed with a question about the content and was provided with feedback.

For details of the dataset and its structure, please refer to the README file.
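To illustrate how recordings from an oddball paradigm like Paradigm 1 are typically analyzed, the sketch below extracts fixed-length epochs around target-word onsets and averages them into an event-related potential (ERP). This is a minimal, hypothetical example on synthetic data: the sampling rate, event times, and array layout are assumptions, not the dataset's actual format (see the README for that).

```python
import numpy as np

# Hypothetical ERP sketch on synthetic data; the dataset's real file
# format, sampling rate, and event annotations are defined in its README.
fs = 1000                      # sampling rate in Hz (assumed)
n_channels = 32                # scalp electrodes, per the description
duration_s = 60                # synthetic recording length in seconds
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, fs * duration_s))  # fake EEG

# Fake onsets of target words (animal/color names), one every 3 s.
event_samples = np.arange(2 * fs, 58 * fs, 3 * fs)

def epoch(data, events, fs, tmin=-0.2, tmax=0.8):
    """Cut a window [tmin, tmax) seconds around each event onset."""
    start, stop = int(tmin * fs), int(tmax * fs)
    return np.stack([data[:, e + start:e + stop] for e in events])

epochs = epoch(eeg, event_samples, fs)  # (n_events, n_channels, n_samples)
erp = epochs.mean(axis=0)               # average across events -> ERP
print(epochs.shape, erp.shape)
```

Averaging across event-locked epochs suppresses activity that is not time-locked to the target words, which is what makes the oddball response visible.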

This work was done at Center for Ear-EEG, Department of Electrical and Computer Engineering, Aarhus University.

Keywords

Auditory Attention Decoding, EEG, Ear-EEG, ERP, signal processing, neuroscience

Impact indicators (provided by BIP!):

  • Selected citations: 0. Citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Popularity: Average. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
  • Influence: Average. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Impulse: Average. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.