ZENODO
Other ORP type . 2026
License: CC BY
Data sources: Datacite

UIT-IC Replication Package

Authors: Heil, Sebastian; Grigera, Julián; Pavlova, Ekaterina; Gaedke, Martin

Abstract

This is the replication package for the submission "Automated Estimation of Web Interaction Complexity based on UI Tests", submitted to the 26th International Conference on Web Engineering (ICWE), Lyon, France.

Overall Structure

The package contains two parts: implementation holds a snapshot of the UIT-IC implementation as of the submitted version, and experiment holds the data and instrumentation of the comparative user study.

Implementation

The implementation directory contains a snapshot of UIT-IC representing the implementation state at the time of submission (2025-08-26). It includes the source code of UIT-IC as a CLI tool, the metrics implementations, and configurations for integration into a CI/CD pipeline. The guide to build and start the application is in the corresponding implementation/README.md file.

Experiment

The experiment directory contains the survey setup, the results of the user study, the results of the automated analysis of the UI tests, the analysis scripts used, and the analysis results.

Survey Setup (questionnaire)

The questionnaire for the participants follows this structure:

1. Demographics
2. General task description
3. First task
4. First task survey
5. Second task
6. Second task survey
7. Conclusion

All parts except the 3rd and 5th, i.e. the two tasks, are identical across participants and can be found in the general_modules folder (questionnaire/general_modules). The 3rd and 5th modules are drawn from the prepared task_modules; each task module represents a certain user interaction with one of the applications chosen for the experiment. The prepared import files for each module can be found in the corresponding folder. The complete_questionnaires folder provides import files for the full LimeSurvey questionnaires for all of the unique setups: there are 32 of them, each with the same general modules and a unique combination of task modules.
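The module assembly described above can be sketched as follows. This is a minimal illustration, not the package's actual tooling: the module names are hypothetical placeholders, and the real import files live under questionnaire/.

```python
# Hypothetical sketch: assembling one questionnaire setup from the shared
# general modules and a pair of task modules. Parts 3 and 5 are the only
# slots that vary between the 32 setups.

GENERAL_MODULES = [
    "demographics",              # part 1
    "general_task_description",  # part 2
    None,                        # part 3: first task module (varies)
    "first_task_survey",         # part 4
    None,                        # part 5: second task module (varies)
    "second_task_survey",        # part 6
    "conclusion",                # part 7
]

def build_questionnaire(first_task: str, second_task: str) -> list[str]:
    """Fill the two variable slots (parts 3 and 5) with task modules."""
    modules = list(GENERAL_MODULES)
    modules[2] = first_task
    modules[4] = second_task
    return modules

# Module names below are illustrative only.
setup = build_questionnaire("app1_task_login", "app3_task_checkout")
print(setup)
```

The fixed modules stay in place for every participant; only the two task slots change, which is what yields the distinct complete questionnaires.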
How the tasks are distributed among surveys can be seen in survey_to_task_mapping.csv.

Experiment Raw Results (raw_data)

This folder contains all raw data collected in the experiment:

- Survey responses (raw_data/responses/), exported directly from each of the 32 LimeSurvey questionnaire setups, with an additional file that maps generated tokens to surveys (raw_data/responses/token_map.csv). The latter is generated as part of the preparation process.
- Log files (raw_data/logs/), exported directly from the server where the applications were deployed (server-logs) and from the execution of the UI tests (test-logs).
- UI test metrics (raw_data/metrics/), computed through the analysis of the UI tests of the four sample codebases.

Experiment Data Analysis (analysis)

Contains all data related to the analysis of the gathered experiment data, including the Python scripts for parsing and analysis and the artifacts generated during the analysis (CSV files, chart images, etc.). The folder has the following structure:

- Python script files for each of the analysis steps: preparation.py, correlation.py, models.py
- Additional Python scripts for sub-tasks of the analysis process (analysis/helpers/)
- Results folder structured according to the analysis process (analysis/results/)
- Corresponding README.md file (analysis/README.md)

How to Reproduce our Experiments

To reproduce the experiment setup, follow these general steps:

1. Import all the LimeSurvey questionnaire files into a LimeSurvey instance.
2. Use the created survey IDs to generate tokens, links, and authentication data.
3. Deploy all the applications and import the generated access data.
4. Deploy the "Welcome page" with the generated token data.
5. Gather user data.
6. Prepare the applications' test setup.
7. Run UI_Tests_Evaluator on all of the projects.
8. Use the analysis scripts to obtain information about the gathered data.
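As an illustration of the kind of analysis the correlation step performs, the sketch below computes a Spearman rank correlation between an automated per-task metric and mean perceived complexity, using only the Python standard library. The data values and the exact statistic are assumptions for illustration; the package's actual logic lives in correlation.py.

```python
# Hedged sketch: Spearman rank correlation between a hypothetical
# automated complexity metric and mean survey ratings per task.
from statistics import mean

def ranks(values):
    """Average 1-based ranks, handling ties by averaging tied positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data only: metric value vs. mean perceived complexity per task.
metric = [3.0, 7.5, 5.1, 9.2, 4.4]
perceived = [2.1, 6.8, 4.9, 8.8, 4.0]
print(round(spearman(metric, perceived), 3))  # rankings agree exactly -> 1.0
```

A rank-based statistic is a natural fit here because survey complexity ratings are ordinal, but whether the package uses Spearman, Pearson, or something else is determined by correlation.py itself.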

Keywords

Automated Testing, Human-centric AI, IXD, UI, Complexity, UI Testing, Web User Interfaces, User Interaction
