This dataset was collected for the work "A sizable fraction of workers provides hasty, wrong answers in crowd-sourced tasks". It contains the answers reported by crowdsourcing workers on Amazon Mechanical Turk (AMT). Three types of tasks (HITs) were published on AMT: Color, Majority, and Count. Each file contains the anonymized answers of 100 workers to one task. The experiments were repeated after seven months, and those data are included as well; the suffix *batch2 in a file name indicates that the data belong to the second round of experimentation. In total, the dataset includes 399 entries × 12 features and 200 entries × 16 features.
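For illustration, here is a minimal sketch of how the batch naming convention might be handled when loading the files with pandas. The concrete file names, extensions, and directory layout are assumptions for the example, not details taken from the record; adjust them to the files actually shipped with the dataset.

```python
# Minimal loading sketch. File names and extensions are assumptions;
# adapt them to the actual contents of the dataset archive.
from pathlib import Path

import pandas as pd

TASKS = ["color", "majority", "count"]  # the three HIT types

def load_task(data_dir: str, task: str, second_batch: bool = False) -> pd.DataFrame:
    """Load one task's answers. Files ending in *batch2 hold the round
    repeated seven months later, per the naming convention above."""
    suffix = "_batch2" if second_batch else ""
    path = Path(data_dir) / f"{task}{suffix}.csv"  # hypothetical file name
    df = pd.read_csv(path)
    df["task"] = task
    df["batch"] = 2 if second_batch else 1
    return df

# Example: stack both rounds of the Color task to compare answers over time.
# color_all = pd.concat(
#     [load_task("data", "color"), load_task("data", "color", second_batch=True)],
#     ignore_index=True,
# )
```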
This work was supported in part by the Regional Government of Madrid (CM) grant EdgeData-CM (P2018/TCS4499), co-funded by FSE and FEDER, the Spanish Ministry of Economy and Competitiveness grant FIS2015-64349-P, the FEDER/Ministry of Science, Innovation and Universities-State Research Agency grant TIN2017-88749-R, and the NSF of China grant 61520106005.
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources; an alternative to the "influence" indicator. | 0 |
| popularity | The "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| influence | The overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | The initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| views | Views of the record, provided by UsageCounts. | 5 |
| downloads | Downloads of the record, provided by UsageCounts. | 2 |
