Opinion

Content Moderation Modulation

Deliberating on how to regulate—or not regulate—online speech in the era of evolving social media.

Debates about speech on social networks may be heated, but the governance of these platforms is more like an iceberg. We see, and often argue over, decisions to take down problematic speech or to leave it up. But these final decisions are only the visible tip of vast and mostly submerged systems of technological governance.

The urge to do something is an understandable human reaction, and so is reaching for familiar mechanisms to solve new problems. But current regulatory proposals to change how social network platforms moderate content are no more a solution for today's problems of online speech than rearranging deck chairs was a solution for the Titanic. To do better, the conversation around online speech must do the careful, thoughtful work of exploring below the surface.

In September 2016, Norwegian author Tom Egeland posted Nick Ut's famous and award-winning photograph The Terror of War on Facebook. The image depicts a nine-year-old girl running naked and screaming down the street following a napalm attack on her village during the Vietnam War. But shortly after it went up, Facebook removed Egeland's post for violating its Community Standards on sexually exploitative pictures of minors.

Citing the photograph's historical and political significance, Egeland decried Facebook for censorship. Because of his moderate celebrity status, the photo's removal quickly became global news. Facebook was rebuked by the Norwegian prime minister, and in a front-page letter titled "Dear Mark Zuckerberg," Aftenposten, one of Norway's main newspapers, chastised the site for running roughshod over history and free speech. In the end, Facebook apologized and restored Egeland's post.

The incident served as a turning point, both for the platforms and for the public. Though sites like YouTube, Reddit, and Facebook had long had policies limiting the content users could post on their platforms, the enforcement of those rules was largely out of the public eye. For many users worldwide, the high-profile removal of The Terror of War was the first time they confronted the potential deleterious effects of these platforms' censorial power. The incident was a foundational lesson not just in how difficult such decisions are but in how high the stakes are if platforms get them wrong.

In turn, the public backlash changed how Facebook operationalized its policies and their enforcement. When a post is flagged for removal by another user, it is placed in a queue and reviewed by a human content moderator, who determines whether it violates the site's Community Standards. Those content moderators are typically off-site workers in the Philippines, India, or Ireland, reviewing flagged content in call centers 24 hours a day, 7 days a week.
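
To make that workflow concrete, the sketch below models the basic flag-and-review loop just described. It is a deliberately minimal, hypothetical illustration (the names FlaggedPost, ReviewQueue, and review_next are invented for this article, not any platform's actual code), and it omits the routing, prioritization, automated filtering, and appeals that real moderation pipelines layer on top.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class FlaggedPost:
        post_id: str
        content: str
        report_reason: str  # for example, "nudity" or "hate speech"

    class ReviewQueue:
        """Hypothetical queue of user-flagged posts awaiting human review."""

        def __init__(self):
            self._pending = deque()

        def flag(self, post: FlaggedPost) -> None:
            # A user report does not remove the post; it only enqueues it
            # for a human moderator to look at.
            self._pending.append(post)

        def review_next(self, violates_standards) -> str:
            # A human moderator applies the Community Standards to the next
            # flagged post and decides whether it stays up or comes down.
            post = self._pending.popleft()
            return "remove" if violates_standards(post) else "keep up"

Even in this toy version, every outcome depends on two separate things: the rule itself and the person (or machine) applying it to a particular post.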

The Terror of War photo violated Facebook's rule on nudity of minors, and thus was removable, but it was also a picture of historical and newsworthy significance, and thus an exception to removal. Historical value and newsworthiness, however, are highly contextual and culturally defined, which makes them difficult for someone from another culture, as a content moderator often is, to recognize. The incident also introduced many to the opaque and unaccountable world in which private social media companies govern the public right to freedom of expression.

Since the Terror of War incident, we have had no shortage of reminders of the power of Big Tech and its lack of accountability to the users who rely on its services to speak and interact. Near-constant controversies about social media's impact on everything from political ads to violent extremism and from data protection to hate speech have led to various attempts at government regulation—some more successful than others.

In the U.S., the First Amendment prevents most legislative reform around privacy and hate speech. This is because privacy in America is typically understood as "protecting individuals from the dissemination of a particular piece of harmful information, or against particularly intrusive information collection," which places potential "privacy laws in tension with the First Amendment's protection of free speech …"1 In the realm of hate speech, while the First Amendment does not protect true threats or incitement to imminent lawless action, it does protect speech that might be offensive and reprehensible to some.

As a result, much of the impetus for reform has come from Europe. Unburdened by the First Amendment, European jurisdictions have been able to enact broad privacy laws like the General Data Protection Regulation (GDPR) and state-specific anti-hate speech laws like Germany's NetzDG.

That the First Amendment precludes sweeping bans on hate speech or on the dissemination of data has not diminished the outcry in the U.S. Politicians and activists have largely focused their efforts on two goals. One is reforming Section 230 of the Communications Decency Act, a foundational law that shields social media platforms from civil liability for communications torts, such as defamation, arising from content posted by their users. The other is using antitrust law to break up, or at least rein in, technology companies. But the fervor for reform has not been matched with enthusiasm for the specifics. So far none of these proposals adequately addresses the technical realities of platforms' policies or their enforcement, an essential first step to take before tinkering with such powerful tools for democracy and speech.

Underlying these efforts is a claim that has gained significant traction over the last five years: that social media companies regulate speech in a politically biased way. Such charges come from both sides of the aisle, but they frequently omit key facts about the rules and processes behind keeping up or taking down content and accounts, and they grossly misunderstand the technical workings of large-scale commercial content moderation.

In the fall of 2017, for example, activists on the left raged when actress Rose McGowan's Twitter account was suspended during the start of the #MeToo movement. McGowan had posted a screencap of an email from Bob Weinstein meant to demonstrate his awareness of the sexual abuse perpetrated by his brother, Harvey Weinstein. When McGowan posted her suspension notification from Twitter on Instagram with the caption "Twitter has suspended me. There are powerful forces at work. Be my voice," the narrative of her suspension immediately turned into a story about the hammer of social media being unfairly wielded against women who speak truth to power and privileging the voices of those on the alt-right. "The game is rigged," wrote journalist Chuck Wendig, "and seeing @rosemcgowan getting suspended from Twitter, you don't have to ask for whom the game is rigged."

It is a common fallacy for humans to see a series of events occur and presume causality or even nefarious intent. But a closer examination of the events around such incidents adds nuance to these narratives. In the case of McGowan's suspension, her original screencap of Bob Weinstein's email had also included his personal phone number—which violated Twitter's rules prohibiting sharing other people's personal identifying information (also known as doxing). Ironically, this policy was the result of years of protest by the feminist community—many of whom had been victims of online abusers and trolls who had posted their home address or telephone number to encourage stalking or harassment.

McGowan's tweet was obviously not a call for harassment or abuse, but it also was obviously a violation of the letter of the policy, and her harsh punishment, suspension, was unfortunately what feminists and domestic violence advocates had long called for as a remedy for violations of the policy. But few in the general public knew the intricacies of that rule, and even fewer knew the backstory that created it. Instead, McGowan's suspension immediately turned into a cause célèbre about the hegemonic silencing of women. Even after Twitter explained its policy and its mistake in suspending McGowan's account, outcry continued over Twitter's privileging of conservative or alt-right voices.

A similar but different story is true of those claiming social media is biased against conservatives; the claim reverses the causality of why posts are removed and mistakenly attributes removal to political animus. For the last decade or more, tech platforms have met with complaints for not taking down enough harmful speech. Many pieces of content, such as adulation for Hitler, white supremacy, or calls for violence, were early, relatively easy things for sites to ban.

But since 2016, conservative politicians and media figures in the U.S. have made claims of anti-conservative bias in social media. They assert that sites unfairly remove or reduce distribution of their speech (although multiple studiesa have shown no such discrimination2). In a Senate hearing on the issue in April 2019,b platform representatives described the policies and enforcement mechanisms that had resulted in the appearance of conservative bias.3 And conservative political commentators like Alex Jones and Diamond & Silk, and Republican politicians Sen. Josh Hawley, Sen. Ted Cruz, and Rep. Marsha Blackburn, have all made such claims after they or their constituents have had content removed from social media. "[T]ech companies … intentionally censor political viewpoints they find objectionable," claimed Cruz in a statement in 2019 after Twitter accidentally froze U.S. Senate majority leader Mitch McConnell's campaign account.


What is missing from the story, again, are the details. Hate speech comes in many different flavors, and rather than maintain long lists of specific ideas or language that should come down, platforms developed specific rules built from elements: essential requirements that content must meet to violate the policy. "We define hate speech as a direct attack on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability," states Facebook's policy rationale on Hate Speech in its Community Standards. This policy, or a variation of it, has been in place since at least 2008. So it is not that Facebook created policies to ban Alex Jones because he is conservative; it is that when Jones addressed Russia investigation special counsel Robert Mueller on his show and imitated firing a gun while saying, "You're going to get it, or I'm going to die trying," he ran afoul of Facebook's long-established standards on hate speech and incitement to violence. According to many people who work in Trust and Safety at these platforms, the public is hearing about more conservatives being removed from social media not because of bias, but because of a huge increase in the volume and extremism of "conservative" content.
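
As a rough illustration of what a rule built from elements looks like, consider the sketch below. It is a hypothetical simplification written for this article, not Facebook's actual implementation; only the list of protected characteristics is taken from the policy language quoted above.

    # Hypothetical sketch: a hate speech rule expressed as "elements,"
    # all of which must be present for content to violate the policy.
    PROTECTED_CHARACTERISTICS = {
        "race", "ethnicity", "national origin", "religious affiliation",
        "sexual orientation", "caste", "sex", "gender", "gender identity",
        "serious disease or disability",
    }

    def violates_hate_speech_rule(is_direct_attack: bool, target_basis: str) -> bool:
        # Element 1: the content is a direct attack on people.
        # Element 2: the attack targets them on the basis of a protected
        # characteristic. If either element is missing, this rule is not
        # violated (though other rules, such as incitement to violence,
        # may still apply).
        return is_direct_attack and target_basis in PROTECTED_CHARACTERISTICS

Under a rule like this, removing a post by a conservative commentator and removing a post by a progressive activist are the same operation: the policy asks only whether the elements are met, not who the speaker is.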

In September 2019, cybersecurity expert Bruce Schneier gave a talk at the Royal Society in London. It was titled "Why technologists need to get involved in public policy," but it could just as easily have been called "Why public policy needs to get involved in technology." At the crescendo of his 15-minute speech, Schneier argued that "Technologists need to realize that when they're building a platform they're building a world … and policymakers need to realize that technologists are capable of building a world." Schneier was ostensibly talking about cybersecurity, but his point speaks to the chasm in the middle of almost every technology debate raging today, including one of the most visible: the debate over how to regulate (or not regulate) online speech in the age of social media.
