Dataset
Data sources: ZENODO

Saccade Transition Datasets (COGAIN2026)

Authors: Engl, Fabian; Prof. Dr. Mottok, Juergen

About this repository

This data repository contains multiple datasets derived from an eye-tracking study with 121 participants. The datasets contain differently encoded saccade transitions and were used in the publication "Predicting User Perception based on Stimuli-Independent Saccade Transitions", published at the ACM Symposium on Eye Tracking Research & Applications (ETRA2026).

In the study, participants interacted with six different websites. Each participant was first assigned a specific task and given up to three minutes to locate the required information. After completing the task, participants were allowed to browse the website freely for the remaining time. The goal of the study was to analyze user interaction behavior and investigate how saccade transition patterns can be used to predict users' perceived usability and user experience (UX) ratings independently of the presented stimulus. Further methodological details and the experimental design can be found in the associated publication (see the section Conference below).

Abstract

Scanpaths and eye movements provide insight into how users perceive and interact with digital products. However, most studies assess user states using stimulus-dependent metrics, like fixations or areas of interest (AOIs). This user study examines whether stimulus-independent saccadic transitions (gaze movements not tied to predefined stimulus elements) carry predictive information about user states and experiences. To do so, eye-tracking data and perceived usability and UX ratings were obtained from 121 participants interacting with six websites. Saccadic transitions were extracted from the scanpaths and analyzed with machine learning models to identify transition patterns predictive of the user ratings. Results show that the models predict perceived usability and UX most accurately when saccade transitions are grouped into the eight inter-cardinal directions and further differentiated by median saccade length.
This demonstrates that even brief, often-overlooked gaze shifts within the stimulus can provide valuable insight into how users perceive websites.

Data Explanation

This repository includes two main types of files: encoded saccade transition datasets used for machine learning, and a PDF file illustrating the encoding schemes (shown only up to 16 directional groups for readability).

Datasets

Dataset files follow the naming convention ETRA2026_NGRAM_{sections}_{threshold}_Linear, where {sections} is the number of directional divisions used in the encoding scheme and {threshold} is the percentile threshold applied to differentiate between short and long saccades.

Encoding Scheme

All four encoding schemes are based on the approach proposed by Bulling et al. (2010), which is described in detail in the associated publication (see the section Conference below). Following this scheme, saccades are first categorized into a predefined number of directional groups. Within each group, saccades are further distinguished by their length. The file "Encoding_Schemes.pdf" provides a visual overview of the four encoding variants, which differ in the number of directional sections.

Machine Learning Labels and Metadata

Each dataset contains the following columns:

ParticipantID: Identifier of the participant (e.g., P001).

Stimulus: The website associated with the recorded saccade transitions. Possible values are DAV1, DAV2, DAV3, Water1, Water2, and Water3, referring to three websites each from the German Alpine Association (DAV) and German water providers.

Labels: The labels are derived from short versions of the User Experience Questionnaire (UEQ) and AttrakDiff, which measure perceived usability and user experience. Specifically: Pragmatic Quality (PQ), which reflects perceived usability, and Hedonic Quality (HQ), which reflects perceived user experience. These are provided in four separate columns: LABEL: UEQ PQ, LABEL: UEQ HQ, LABEL: AttrakDiff PQ, LABEL: AttrakDiff HQ.

Acknowledgments

This study was conducted as part of the EU-funded project "Digital Innovation Ostbayern (DInO)". DInO is part of the European Digital Innovation Hub (EDIH) program and is funded by the European Union (Project Reference 101083427) and the European Funds for Regional Development (EFRE) (Project Reference 20-3092.10-THD-105). The eye-tracking study was approved by the Joint Ethics Committee of the Bavarian Universities (GEHBa) under reference number GEHBa-202312-V-155-R. The Tobii Pro Fusion eye trackers were funded by the German Federal Ministry of Research, Technology and Space (BMFTR) through the Bavarian State Budget (ZD.B) (FKZ: 16-1541).

Contact Information

If you have any questions, feel free to reach out to the owner of this repository by mail: fabian.engl(a)oth-regensburg.de
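As a rough illustration of this style of directional encoding, the Python sketch below bins a saccade vector into one of N directional sections and splits each section into short/long by a length threshold. The function name, parameters, and output symbols are hypothetical; the exact encoding used to produce the datasets follows Bulling et al. (2010) as described in the publication.

```python
import math

def encode_saccade(dx, dy, n_sections=8, length_threshold=1.0):
    """Encode a saccade vector (dx, dy) as a symbol combining a
    directional bin (0 .. n_sections-1) and a short/long flag.

    Hypothetical sketch; the actual dataset encoding follows
    Bulling et al. (2010) as described in the publication.
    """
    # Saccade angle mapped into [0, 2*pi)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Width of each directional section; bins are centered on the
    # principal directions (e.g., 8 sections -> eight compass-like bins)
    section_width = 2 * math.pi / n_sections
    direction = int(((angle + section_width / 2) % (2 * math.pi)) // section_width)
    # Distinguish short vs. long saccades within each direction
    length = math.hypot(dx, dy)
    size = "L" if length > length_threshold else "S"
    return f"{size}{direction}"
```

Sequences of such symbols can then be turned into n-gram features for the machine learning models, which is consistent with the NGRAM part of the file naming convention.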

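Given the documented column layout, loading the label data is straightforward. The sketch below assumes the dataset files are plain CSV with a header row (the actual format and delimiter should be verified against the files themselves); the file path is a placeholder.

```python
import csv

def load_labels(path):
    """Collect the four rating labels per (participant, stimulus) pair.

    Assumes a CSV file with a header row containing the documented
    columns; check the actual dataset files for format and delimiter.
    """
    label_columns = [
        "LABEL: UEQ PQ",
        "LABEL: UEQ HQ",
        "LABEL: AttrakDiff PQ",
        "LABEL: AttrakDiff HQ",
    ]
    records = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["ParticipantID"], row["Stimulus"])
            records[key] = {col: float(row[col]) for col in label_columns}
    return records
```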