
Proposals for Improved Regulation of Harmful Online Content

June 17, 2020

In its early life the internet inspired optimism that it would improve the world and its people, but that optimism has been supplanted by alarm about harmful, often viral words and images. Though the vast majority of online content is still innocuous or beneficial, the internet is also polluted by hatred: some individuals and groups suffer harassment or attacks, while others are exposed to content that inspires them to hate or fear other people, or even to commit mass murder.

Hateful and harmful messages are so widespread online that the problem is not specific to any culture or country, nor can such content be easily classified under terms like "hate speech" or "extremism": it is too varied. Even the people who produce harmful content, and their motivations for doing so, are diverse. Online service providers (OSPs) have built systems to diminish harmful content, but those systems are inadequate for the complex task at hand and have fundamental flaws that cannot be solved by tweaking the rules, as the companies have been doing so far. The stakeholders who have the least say in how speech is regulated are precisely those who are subject to that regulation: internet users. "I've come to believe that we shouldn't make so many important decisions about speech on our own," Mark Zuckerberg, the CEO and a founder of Facebook, wrote last year. He is correct.

Daunting though the problem is, there are many opportunities for improvement, but they have been largely overlooked. The widespread distress about harmful content is itself an opportunity, since it means millions of people are paying attention, and it will take broad participation to build online norms against such content. Mass participation of that kind is neither far-fetched nor unfamiliar: many beneficial campaigns and social movements have been born and developed thanks to mass participation online.

This paper offers a set of specific proposals for better describing harmful content online and for reducing the damage it causes, while protecting freedom of expression. The ideas are mainly meant for OSPs, since they regulate the vast majority of online content; taken together, they operate the largest system of censorship the world has ever known, controlling more human communication than any government. Governments, for their part, have tried to berate or force the companies into changing their policies, with limited and often repressive results. For these reasons, this paper focuses on what OSPs should do to diminish harmful content online.

The proposals focus on the rules that form the basis of each regulation system, as well as on other crucial steps in the regulatory process, such as communicating rules to platform users, giving multiple stakeholders a role in regulation, and enforcing the rules.

Counterspeech: A Literature Review

November 20, 2019

Every day, internet users encounter hateful and dangerous speech online, and some of them choose to respond directly in order to refute or undermine it. We call this counterspeech. Only a few studies have attempted to measure the effectiveness of counterspeech directly, and as far as we know, this is the first review of the relevant literature.

We've collected and reviewed related articles from a range of fields including political science, sociology, countering violent extremism, and computational social science. These articles do not all use the term "counterspeech," but they shed light on various features of successful counterspeech, for example, qualities that make speakers/authors more influential in online interactions, or the extent to which pro- and anti-social behavior is contagious on the internet.

Dangerous Speech: A Practical Guide

December 31, 2018

No one has ever been born hating or fearing other people. That has to be taught, and those harmful lessons seem to be similar even though they are given in highly disparate cultures, languages, and places. Throughout human history, leaders have used particular kinds of rhetoric to turn groups of people violently against one another, by demonizing and denigrating others. Vocabulary varies but the same themes recur: members of other groups are depicted as threats so serious that violence against them comes to seem acceptable or even necessary. Such language (or images or any other form of communication) is what we have termed "Dangerous Speech."

Naming and studying Dangerous Speech can be useful for violence prevention in several ways. First, a rise in the abundance or severity of Dangerous Speech can serve as an early warning indicator for violence between groups. Second, violence might be prevented or at least diminished by limiting Dangerous Speech or its harmful effects on people. We do not believe this can or should be achieved through censorship. Instead, it's possible to educate people so they become less susceptible to (less likely to believe) Dangerous Speech. The ideas described here have been used around the world, both to monitor and to counter Dangerous Speech.

This guide, a revised version of an earlier text (Benesch, 2013), defines Dangerous Speech, explains how to determine which messages are indeed dangerous, and illustrates why the concept is useful for preventing violence. We also discuss how digital and social media allow Dangerous Speech to spread and threaten peace, and describe some promising methods for reducing Dangerous Speech or its harmful effects on people.
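The early-warning idea can be made concrete with a minimal sketch. The Python below is illustrative only, not anything the guide prescribes: it assumes some upstream step (human coders or a classifier, not implemented here) has already flagged messages as Dangerous Speech, and it simply counts flags per week and reports weeks whose volume rises well above the average, the kind of increase the guide treats as a warning sign.

```python
from collections import Counter
from datetime import date

# Hypothetical input: dates of messages already flagged as Dangerous Speech
# by an assumed upstream step (human coding or a classifier).
flagged_dates = [
    date(2018, 1, 1), date(2018, 1, 1), date(2018, 1, 2),
    date(2018, 1, 8), date(2018, 1, 8), date(2018, 1, 8),
    date(2018, 1, 8), date(2018, 1, 9),
]

def weekly_counts(dates):
    """Count flagged messages per (ISO year, ISO week)."""
    return Counter(d.isocalendar()[:2] for d in dates)

def spike_weeks(counts, factor=1.5):
    """Return weeks whose count exceeds `factor` times the mean weekly
    count: a crude proxy for 'a rise in the abundance of Dangerous Speech'."""
    mean = sum(counts.values()) / len(counts)
    return [week for week, n in sorted(counts.items()) if n > factor * mean]

counts = weekly_counts(flagged_dates)          # week 1: 3 flags, week 2: 5
print(spike_weeks(counts, factor=1.2))         # -> [(2018, 2)]
```

A real monitoring effort would also weight messages by severity and adjust for overall message volume, but the underlying logic of tracking a baseline and watching for departures from it is the same.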

Considerations for Successful Counterspeech

October 14, 2016

It may sometimes seem that the Internet is sullied by a relentless tide of hatred, vitriol, and extremist content, and that not much can be done to respond effectively. Such content cannot all be deleted, after all: even if a statement, image, or user is deleted from one platform, there is always somewhere else to go.

We have been pleasantly surprised, however, that our study of Twitter turned up numerous cases of effective counterspeech, which we define as a direct response to hateful or dangerous speech. Based on this first, qualitative study of counterspeech as it is practiced spontaneously on Twitter, we offer some preliminary suggestions on which strategies may help to make counterspeech successful.

Counterspeech on Twitter: A Field Study

October 14, 2016

As hateful and extremist content proliferates online, 'counterspeech' is gaining currency as a means of diminishing it. No wonder: counterspeech doesn't impinge on freedom of expression and can be practiced by almost anyone, requiring neither law nor institutions. The idea that 'more speech' is a remedy for harmful speech has been familiar in liberal democratic thought at least since U.S. Supreme Court Justice Louis Brandeis declared it in 1927. We are still without evidence, however, that counterspeech actually diminishes harmful speech or its effects. This would be very hard to measure offline but is a bit easier online, where speech and responses to it are recorded. In this paper we make a modest start. Specifically, we ask: in what forms and circumstances does counterspeech, which we define as a direct response to hateful or dangerous speech, favorably influence discourse and perhaps even behavior?

To our knowledge, this is the first study of Internet users (not a government or organization) counterspeaking spontaneously on a public platform like Twitter. Our findings are qualitative and anecdotal, since reliable quantitative detection of hateful speech and counterspeech remains an unsolved problem, owing to the wide variation in the language employed. We did make progress on detection, however, as reported in an earlier paper from this project (Saleem, Dillon, Benesch, & Ruths, 2016).

We have identified four categories or "vectors" in each of which counterspeech, like hateful speech, functions quite differently: one-to-one exchanges, many-to-one, one-to-many, and many-to-many. We also present a set of counterspeech strategies extrapolated from our data, with examples of tweets that illustrate those strategies at work, and suggestions for which ones may be successful.
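To make the four vectors concrete, here is a small Python sketch of our own, not code from the study: it labels an exchange by counting the distinct accounts on each side. Reading the first slot as the counterspeaking side and the second as the recipients of the counterspeech is our illustrative assumption.

```python
def _side(n: int) -> str:
    """Collapse a count of distinct accounts to 'one' or 'many'."""
    return "one" if n == 1 else "many"

def classify_vector(n_counterspeakers: int, n_recipients: int) -> str:
    """Label an exchange with one of the four counterspeech 'vectors',
    read here as counterspeaker(s) -> recipient(s) of the counterspeech."""
    if n_counterspeakers < 1 or n_recipients < 1:
        raise ValueError("an exchange needs at least one account on each side")
    return f"{_side(n_counterspeakers)}-to-{_side(n_recipients)}"

print(classify_vector(1, 1))   # one-to-one: a single reply to a single account
print(classify_vector(7, 1))   # many-to-one: a group answers one account
print(classify_vector(1, 50))  # one-to-many: one user addresses a crowd
print(classify_vector(6, 40))  # many-to-many
```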