
This dataset accompanies the research article "Comparative Study Between Heuristic Evaluation Done by Human Experts vs NLP Agents". It includes all relevant files used and produced during the study, covering both the human and the AI-based heuristic evaluations. The contents are organized into the following categories:

- Raw Evaluation Data: original heuristic evaluation responses provided by human experts and NLP agents, organized by heuristic principle and evaluation question.
- Supporting Materials for Analysis: qualitative and quantitative analysis resources, such as tables, summaries, and analysis results.
- TF-IDF Implementation: Jupyter Notebooks (.ipynb) containing the code used to preprocess the data, compute TF-IDF values, and generate term frequency visualizations (a minimal usage sketch follows after this list).
- Supplementary Documents: PDF reports, figures, and tables used for the interpretation and discussion of the evaluation results.
- Compressed Archives: some materials are provided in .zip format for easier access and organization of grouped files.

All files are provided in accessible, open formats to support reproducibility and further research in usability engineering, human-AI comparison, and heuristic evaluation methods.
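For orientation, the following is a minimal sketch of the kind of TF-IDF computation the notebooks perform, assuming a scikit-learn pipeline. The file name (evaluation_responses.csv) and column name (response_text) are hypothetical placeholders; the actual preprocessing steps and inputs are those in the provided .ipynb files.

```python
# Minimal TF-IDF sketch (not the study's exact pipeline; see the notebooks).
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Load the heuristic evaluation responses (hypothetical file and column names).
responses = pd.read_csv("evaluation_responses.csv")

# Compute TF-IDF weights over the response texts.
vectorizer = TfidfVectorizer(stop_words="english", lowercase=True)
tfidf = vectorizer.fit_transform(responses["response_text"])

# Rank terms by their mean TF-IDF weight across all responses.
mean_weights = np.asarray(tfidf.mean(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
top_terms = sorted(zip(terms, mean_weights), key=lambda t: t[1], reverse=True)[:20]
for term, weight in top_terms:
    print(f"{term}: {weight:.4f}")
```

A ranking like this is one simple way to surface the most characteristic terms in each evaluator group's responses, which can then feed the term frequency visualizations described above.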
Human-Centered AI (HCAI), Usability, Heuristic Evaluation, Natural Language Processing (NLP)
