
pmid: 36749721
pmc: PMC9963596
In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
Keywords: content moderation, misinformation, disinformation, harmful content, online speech, freedom of expression, moral dilemma, conjoint experiment, social media, morals, emotions, speech, communication, politics, humans, SDG 16 - Peace, Justice and Strong Institutions
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of the article based on the underlying citation network (diachronically). | 97 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 1% |
| Influence | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Top 1% |
