
Driving Hate from The Internet: Noble or Futile?

Although social media giants have been criticized for the spread of malicious and violent content online, they actually are dedicated to mitigating harm.
Cracking down on hateful Internet content.

Hardly had the ink dried on the Christchurch Call to Action, the global pledge to fight what its creators termed "terrorist and violent extremist content online" in response to attacks on two mosques in Christchurch, New Zealand, when yet another mass murder took place, this time in El Paso, TX.

The perpetrators of both murders posted "manifestos," still available online several days after the El Paso shootings, describing why they felt compelled to kill innocent people indiscriminately. The Christchurch shooter livestreamed his attacks on Facebook for 17 minutes.

In the 80 days between the signing of the Christchurch Call and the El Paso shootings, members of the technology community including academic researchers, social media giants, and online safety organizations asked profound questions of themselves and each other about how to develop the technological and human capabilities to ferret out content deemed harmful by civilized consensus.

In the days after the El Paso murders (as well as another mass killing in Dayton, OH, just a few hours afterward), as the websites and bulletin boards hosting this kind of content found themselves orphaned by their upstream providers, only to be adopted by others within hours, it became clear how very far the world is from defining what hate is, much less eliminating it.

Early days for good intentions

If there were an overarching ideal to strive for online, it might be that expressed by Martin Cocker, CEO of Auckland, NZ-based Netsafe, a private not-for-profit organization that provides New Zealanders tools, support, and advice for managing online challenges. "There's been a movement in online safety going on for a long time of developing the concept of digital citizenship," Cocker said, "and a key part of that is being able to accurately interpret information and recognize when information is being presented to you falsely or inaccurately. But humans are complex, and they often interpret information from their own personal starting position, so it won't be an easy problem to solve."

Although social media giants such as Facebook and Twitter have received the brunt of criticism for the spread of malicious and violent content online, thus fueling more expressions and acts of hatred, Cocker and researchers such as Ohio State University communications professor R. Kelly Garrett say the behemoths are dedicated to mitigating harm. For instance, shortly after the Christchurch attacks, Facebook deputy general counsel Chris Sonderby posted on the company blog, outlining the steps Facebook took afterward to remove both the original and shared copies from the site.

"Around the world, there's been a huge focus on the big players in light of these attacks," Cocker said. "But the reality is, they were the organizations that were actively engaged with New Zealand law enforcement and other people here to try and remove the content, whereas there were another 50 or 60 sites hosting the content who were either ignoring requests to remove it or were part of a push to keep that content alive."

Garrett is the recipient of one of 12 grants for research in the Social Science Research Council (SSRC) Social Media and Democracy project, which will use Facebook data to try to analyze how people react to and disseminate information on the platform. He said he cannot speak to what Facebook's corporate-level intentions are regarding the extent to which the company must go to fight misinformation and inflammatory content, but said his interactions with Facebook data scientists thus far show nothing but sincerity.

"The people on the ground, the people I interface with and who are doing the data science and design work, are really concerned about the problems the research community and press have identified, and they are very enthusiastic about trying to find ways to create opportunities to collaborate and to make the data they have more available," Garrett said. "Many of them started outside of Facebook; many started as academics. So they have an appreciation for the value of social science in the public interest. I think they are very excited about the possibility that stuff they have been able to do internally, but can't talk to anyone about, may eventually be something that can be used more generally."

However, Garrett said, getting data into the hands of the researchers was proving more problematic than originally thought.

"After we submitted our proposal, we got some guidance from Facebook by way of SSRC saying they had grown concerned about the privacy implications of the dataset as it had been originally specified, so they were doing a couple things to try to rein in some of those concerns."

Among the steps Facebook took to safeguard users' data, Garrett said, were some additional restrictions on the types of data that would be available, as well as the use of differential privacy (a technique of adding quantitatively acceptable noise to datasets) to provide greater protection.
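The general idea behind differential privacy can be shown with a minimal sketch; this toy example is an illustration of the technique itself, not Facebook's actual system. A count query is answered after adding Laplace noise calibrated to the query's sensitivity and a privacy budget epsilon, so the presence or absence of any single record cannot be reliably inferred from the released number.

import numpy as np

def dp_count(values, predicate, epsilon=0.5, rng=None):
    """Return a differentially private count of items matching `predicate`.

    Adds Laplace noise scaled to the query's sensitivity (1 for a count)
    divided by the privacy budget epsilon.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # sensitivity / epsilon
    return true_count + noise

# Example: a noisy count of shares of one URL in a small synthetic dataset.
shares = ["url_a", "url_b", "url_a", "url_a", "url_c"]
print(dp_count(shares, lambda u: u == "url_a", epsilon=0.5))

Smaller values of epsilon add more noise, trading accuracy for stronger privacy guarantees.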

"Because this is very new stuff for them and for us, they have been slow to roll it out. We were out there in June to be introduced to the differential privacy idea system they are putting in place, so we had the opportunity to see how the system works, but we don't yet have the opportunity to use the real data."

While the original SSRC request for proposal stipulated the research needed to be completed within a year, Garrett said he thought the delay in researchers receiving live data made that timeline "a stretch."

The usability of other Facebook datasets also has been problematic. In April, Mozilla researchers wrote that Facebook's ad archive API did not meet minimal standards for studying political ads on the platform. Other media outlets, including The New York Times and Vice, have repeated those concerns.

Many "other hands"

Buggy libraries, vexatious though they may be, are just one obstacle to finding a way to mitigate the intertwined problems of online misinformation and harmful behavior. The sheer volume of data being released onto the Internet, and the existential disconnect between social media platforms' architectures and society's need for vetted information, are creating the well-publicized perils, but also opportunities, according to Mark Little, co-founder and CEO of Dublin-based startup Kinzen. The Kinzen app uses a combination of professional curation, user feedback, and machine learning to create trusted news feeds.

"The problem we are trying to solve is the recommender systems of social media are optimized for outrage and clicks, and optimized for advertising," Little said. "As a result of that, the algorithms powering the recommender systems on platforms like YouTube, and to a great extent Facebook, are being optimized for the wrong signals."

Both Little and Cocker say the sheer volume of content being released and the nascent capabilities of tech-only solutions mean human curation will be a crucial part of the equation for some time.

Kinzen, for instance, employs three full-time journalists and a community editor to develop a credibility scale for content platforms.

"We will start assigning some level of extra power in the way we circulate content to the user based on things like credibility, originality, and diversity of opinions they quote," Little said. "We are not building another platform, another aggregator. Our job is to essentially build data services and content feeds for established publishers. We call it personalization as a service, promoting the user rather than manipulating them. We're really attempting to rewire the recommender systems that power information using the combination of good old fashioned journalism and state of the art artificial intelligence—essentially, become the human in the machine."

Cocker said another avenue of research that could improve the performance of systems designed to ferret out harmful content involves the user reporting process. For instance, Netsafe's home page features a prominent incident-reporting button. Facebook places its reporting function within each individual post, among a menu of options a user has after clicking on small buttons in the upper right of each post, or on a question mark icon in the upper right of the screen. Whether the more intuitive Netsafe design (combined with a direct machine-to-machine reporting function) would ever be practical or even desired depends on numerous factors, Cocker said, though finding some way to speed up notice of episodes such as shootings is critical.

"One of the ongoing conversations between Netsafe, others agencies like us around the world, and the major players is how those processes can be streamlined, because at the moment, there are human steps between somebody reporting to Netsafe and Netsafe reporting content to those players," he said. "In New Zealand, we provide function under the Harmful Digital Communications Act, which is supposed to work by ideally making the individual who produced harmful content responsible for removing or moderating it. So we don't necessarily want to report through to Facebook; we could prefer the person who produced it is made aware and responsible for acting on it.

"So, right now, we wouldn't want to just plug that straight into their systems. But as a general rule, clearly streamlining the reporting would make a difference. And when you talk about things like the Christchurch attacks, the speed between people finding content and getting that content in front of those players and recognized by their systems for takedown is critical. The difference between five minutes and 10 minutes can be the difference between stopping something from circulating and having it circulate for all time."

Devil's advocates have help

A common denominator in the Christchurch and El Paso attacks was that both perpetrators posted manifestos on the 8Chan Internet chat board. The site has become synonymous with white supremacist and conspiracy theory advocates and, until several days after the El Paso shooting, remained accessible to nearly anyone who could find its URL, even though it had been removed from Google search results.

Within days, however, one of the site's upstream service providers, Cloudflare, announced it would be terminating services with 8Chan. Shortly thereafter, another upstream provider, Epik, said it anticipated providing content delivery services to 8Chan, but quickly backtracked and announced it would not do so after one of its upstream providers cut services to it.

While removing violent or inciteful content or preventing it from being successfully uploaded in the first place might appear to be a universally sought-after result, some researchers say wider exposure, combined with some sort of distribution mitigation, may actually help law enforcement.

"With lone-actor events, we have found they may be alone in the execution of an event, but there is very often part of a wide network of co-ideologues online," said Joe Whittaker, a researcher at Swansea University and the International Center for Counter-Terrorism. "It's a much harder challenge if you don't have a network of people to investigate, but lone actors quite often do leave more clues than you might suspect in advance. The AI side of it for detection of this kind of content is important, but then we enter back into the problem that if all it is going to do is remove the content before it's even been uploaded, we may be harming law enforcement in some way."

However, the nuances of deciding what sort of content to ban, de-emphasize, or shuttle into barely accessible carrels for research purposes remain difficult. In addition, just as the actions surrounding removing 8Chan from the easily accessible Internet were revealed to be an awkward minuet of self-interested companies responding to the antagonistic motivations of widespread outrage and doctrinaire free speech philosophies, proactively monitoring hate content on a site-by-site basis may be technically infeasible and philosophically impossible.

"On the one hand, if Facebook were to take a stronger editorial role in making decisions, it could help address some of the problems we are talking about here," Garrett said. "The challenge, then, is do you really want a single outlet with that much power? If there were a single paper of record for the world, would you really want an editorial board making those kinds of decisions? I'm not trying to cling to one side or the other. There are clearly strengths to what editors do, but giving that much editorial power to an organization with the scale of Facebook is potentially dangerous."

One possible solution may be better cooperation between major players, facilitated by public policy that offers safe harbors. In written testimony before a U.S. House of Representatives subcommittee on intelligence and counter-terrorism, Stanford University researcher and former Facebook executive Alex Stamos recommended a coordinating body be created to minimize hate speech and content from fringe sites such as 8Chan.

"The large tech firms do not (and should not) have the ability to ban such sites from existing," Stamos said. "What they can do, however, is reduce the chance that content on their platforms can be used as a jumping-off point for these communities. A coordinating body could decide to maintain a list of sites that could then be voluntarily banned from the major social media platforms. As of today, Facebook, Google, and Twitter are deciding on a per-page basis of whether to allow links to these sites. A global ban on these domains would be consistent with steps they have taken against malware-spreading, phishing, or spam domains, and would allow those sites to exist while denying their supporters the capability to recruit new followers on the large platforms."

Whether or not such a body will ever be formed, Little said he detects sentiment among legitimate e-commerce players to start cracking down on the lawlessness masquerading as liberty.

"The biggest demand for change, for reform, is actually coming from advertisers," Little said, "because they are sick and tired of spending billions of dollars to be associated with crazy conspiracy theorists on YouTube. We are seeing an emerging thirst from publishers, traditional news businesses but also marketers and advertisers, who really want to be associated with good quality information. So we can see ourselves working with platforms who want to change their recommender systems, who want to import some of this quality human decisionmaking into the way they recommend content.

"In 2010, there were three of us sitting in an attic in Dublin looking at 500 videos coming from Egypt, Syria, and Libya by hand. Today we can analyze 250,000 pieces of content a week and have some ability to start structuring it, to make some sense of it. We're a manual car in first gear just starting out. We hope to step up a gear in the next year or two."

 

Further reading

Advancing self-supervision, CV, NLP to keep our platforms safe:
An update by Facebook AI researchers of the latest technological approaches to reducing hate and harmful content online.
http://bit.ly/2pKd8Qg

 

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
