19.02.2025

Study shows support for sanctioning online hate speech

But support is subject to ideological bias, according to a survey co-led by Professor of Data Science and Public Policy Simon Munzert.

Online spaces offer a major platform for public debate – but also for hate speech. As policymakers and online platforms continue to grapple with whether and how best to regulate hateful language on the web, a new experimental study shows support for sanctioning online hate speech. In the study, conducted in Germany and the United States, most respondents favoured sanctioning severe forms of hateful speech, such as calls for violence. However, respondents were less likely to support measures that reach beyond the online platform, and they were also less likely to support consequences for their own ideological camp.

The study, titled “Citizen preferences for online hate speech regulation”, was conducted by Professor of Data Science and Public Policy and Data Science Lab Director Simon Munzert, along with Richard Traunmüller (University of Mannheim), Pablo Barberá (University of Southern California), Andrew Guess (Princeton University), and JungHwan Yang (University of Illinois). 

“Tackling hate speech is not a problem that can be solved with technology alone. Ultimately, the question of what constitutes hate speech and what should be done about it is a normative one. As the clash between US Vice President JD Vance and European politicians at the Munich Security Conference made strikingly clear, people’s views vary greatly depending on cultural background and political convictions. This means that any efforts to curb hate speech online should also consider public perceptions of the issue. This is what we tried to do with our study,” explains Munzert.

Participants support sanctions for extreme hate speech – but only within the platform

The research consisted of a series of experiments with over 2,500 participants from Germany and the United States. Respondents were shown eight realistic social media dialogues. Each exchange began with a message, followed by a response that varied in intensity from harmless to extremely hateful. Respondents were first asked to rate how hateful the response was, and then to choose an appropriate reaction to it – from measures taken by the platform, such as deleting the message, to consequences beyond the platform, such as a fine or jail time.

The study results show that respondents were far more likely to support restrictions on freedom of expression when the hate speech was more severe. Extreme insults were 34 per cent more likely to elicit support for platform sanctions than moderately discriminatory posts; for extremely violent posts, respondents were 55 per cent more likely to support platform sanctions. Consequences reaching beyond the platform, such as fines or jail time, met with far less support, even for extreme hate speech: a third of respondents in Germany and up to half in the United States opposed such sanctions.

Most study participants supported sanctions for extreme online hate speech, but only on the platform.

Research reveals a clear in-group bias

The study results also reveal a potential obstacle to widespread support for hate speech regulation: the perception of hate speech is significantly influenced by one’s own group affiliation. The US results suggest that people are more tolerant of hate speech from their own ideological group and judge hate speech from the other group more harshly. “This in-group bias means that people primarily reject regulation when it affects their own ideological group,” Professor Munzert explains. “On the other hand, people are more likely to support regulation when it affects the other side.”

Taken together, the findings cast doubt on the potential for automating hate speech moderation. If human judgments are significantly shaped by cultural and political values, using them as a gold standard for automated moderation at scale is problematic. “Here, the evidence of our study reaches its limits,” Munzert says. “While people do in fact support moderation in particular cases, we don’t find evidence for a general societal consensus on how policymakers should regulate hate speech.”

The study “Citizen preferences for online hate speech regulation” was published in the journal PNAS Nexus on 12 February 2025.


More about our expert

  • Simon Munzert, Professor of Data Science and Public Policy | Director, Data Science Lab