ZENODO
Dataset . 2019
License: CC BY
Data sources: Datacite

Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

Authors: Ruohan Zhang; Zhuode Liu; Lin Guan; Luxin Zhang; Mary Hayhoe; Dana Ballard


Abstract

We introduce a large-scale dataset of human actions and eye movements recorded while playing Atari video games. Human subjects played the games in a frame-by-frame manner, allowing enough decision time to obtain near-optimal decisions. For every game frame, we recorded the corresponding image frame, the human keystroke action, the reaction time for that action, the gaze positions, and the immediate reward returned by the environment. Gaze data were recorded with an EyeLink 1000 eye tracker at 1000 Hz. The human subjects were amateur players who were familiar with the games. Each subject was allowed to play for only 15 minutes per trial and was required to rest for at least 15 minutes before the next trial. We collected data from 4 subjects across 16 games, comprising 175 15-minute trials and a total of 2.97 million frames/demonstrations.

1. meta_data.csv: metadata for the dataset. Data fields:
- Game: String. Game name.
- TrialNumber: Integer. Use this number to locate the associated .tar.bz2 file and label file.
- SubjID: Char. Human subject identifier.
- Load: Integer. 0 indicates that the game starts from scratch. A non-zero value means the current trial continues from a saved trial, and the number indicates the trial number to look for.
- FrameAveraging: Boolean. The game engine allows this to be turned on or off. When on (TRUE), two consecutive frames are averaged, which alleviates screen flickering in some games.
- FPS: Integer. Frames per second while an action key is held down.
- NumberOfFrames: Integer. Number of image frames in the .tar.bz2 archive.
- AverageValError: Float. Eye-tracking validation error at the end of each trial, in visual degrees (1 visual degree = 1.44 cm in our experiment).
- BestScore: Integer. The highest game score obtained in this trial.

2. *.tar.bz2 files: contain the game image frames. The filename indicates the trial number.

3. *.txt files: one label file per trial. Data fields:
- frame_id: String. The ID of a frame; can be used to locate the corresponding image frame in the .tar.bz2 file.
- episode_id: Integer (not available for some trials). Episode number, starting from 0 for each trial. A trial can contain a single episode or multiple episodes.
- score: Integer (not available for some trials). Current game score at that frame.
- duration(ms): Integer. Time elapsed until the human player made a decision.
- unclipped_reward: Integer. Immediate reward returned by the game engine.
- action: Integer. See action_enums.txt for the mapping; this is consistent with the Arcade Learning Environment setup.
- gaze_positions: Null, or a list of integers x0,y0,x1,y1,...,xn,yn giving the gaze positions for the current frame. May be null if no gaze was recorded. (0,0) is the top-left corner; x is the horizontal axis and y is the vertical axis.

If you use Atari-HEAD in your research, we ask that you please cite the following:

Zhang, Ruohan, Zhuode Liu, Luxin Zhang, Jake A. Whritner, Karl S. Muller, Mary M. Hayhoe, and Dana H. Ballard. "AGIL: Learning attention from human for visuomotor tasks." In Proceedings of the European Conference on Computer Vision (ECCV), pp. 663-679. 2018.

@inproceedings{zhang2018agil,
  title={AGIL: Learning attention from human for visuomotor tasks},
  author={Zhang, Ruohan and Liu, Zhuode and Zhang, Luxin and Whritner, Jake A and Muller, Karl S and Hayhoe, Mary M and Ballard, Dana H},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={663--679},
  year={2018}
}
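The label-file layout described above can be parsed with a few lines of code. Below is a minimal sketch in Python, assuming comma-separated fields in the order listed (frame_id, episode_id, score, duration, unclipped_reward, action, then the flattened gaze list). The sample frame IDs and the choice to parse gaze coordinates as floats are illustrative assumptions, not part of the dataset specification:

```python
def parse_label_line(fields):
    """Parse one data row of an Atari-HEAD label (.txt) file.

    `fields` is a list of strings, assumed to follow the order:
    frame_id, episode_id, score, duration(ms), unclipped_reward,
    action, then an optional flat list x0,y0,x1,y1,... of gaze
    positions ("null" when no gaze sample was recorded).
    """
    def opt_int(s):
        # episode_id and score are unavailable ("null") in some trials.
        return None if s in ("null", "") else int(s)

    frame_id = fields[0]
    episode_id = opt_int(fields[1])
    score = opt_int(fields[2])
    duration_ms = int(fields[3])
    unclipped_reward = int(fields[4])
    action = int(fields[5])  # see action_enums.txt for the mapping

    gaze = fields[6:]
    if gaze and gaze[0] != "null":
        # Parsed as floats for safety; pair consecutive values into
        # (x, y) points, with (0, 0) at the top-left corner.
        vals = [float(v) for v in gaze]
        gaze_positions = list(zip(vals[0::2], vals[1::2]))
    else:
        gaze_positions = None

    return {
        "frame_id": frame_id,
        "episode_id": episode_id,
        "score": score,
        "duration_ms": duration_ms,
        "unclipped_reward": unclipped_reward,
        "action": action,
        "gaze_positions": gaze_positions,
    }


# Hypothetical rows for illustration:
row = parse_label_line("f_001,0,12,55,0,3,10.0,20.0,30.0,40.0".split(","))
no_gaze = parse_label_line("f_002,null,null,55,0,0,null".split(","))
```

The returned dictionary keeps missing fields as None, so downstream code can filter frames without gaze samples before training an attention model.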

Keywords

Video games, Eye tracking, Atari, Imitation learning, Learning from demonstration, Visual attention

Metrics

Impact (provided by BIP!):
  • Selected citations: 0 (derived from selected sources; an alternative to the "Influence" indicator)
  • Popularity: Average (reflects the "current" impact/attention of an article in the research community at large, based on the underlying citation network)
  • Influence: Average (reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
  • Impulse: Average (reflects the initial momentum of an article directly after its publication, based on the underlying citation network)

Usage (provided by OpenAIRE UsageCounts):
  • Views: 193
  • Downloads: 665