Human computation is often subject to systematic biases. We consider the case of linguistic biases and their consequences for the words that crowdworkers use to describe people images in an annotation task. Social psychologists explain that when describing others, the subconscious perpetuation of stereotypes is inevitable, as we describe stereotype-congruent people and/or in-group members more abstractly than others. In an MTurk experiment we show evidence of these biases, which are exacerbated when an image's "popular tags" are displayed, a common feature used to provide social information to workers. Underscoring recent calls for a deeper examination of the role of training data quality in algorithmic biases, results suggest that it is rather easy to sway human judgment.
social biases, linguistic biases, social cues, social stereotypes
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 3 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | Usage count. | 27 |
| Downloads | Usage count. | 13 |

Views and downloads provided by UsageCounts.