
Propaganda Bots vs. Transparency


There is no doubt digital communications platforms have been exploited by propagandists in efforts to influence public opinion in recent elections worldwide.

The good news is, researchers are learning more about how disinformation spreads, and electorates seem to be catching on to the fakery, at least to some extent. The bad news is, the propagandists are not letting up, whether they are clumsy, amateurish meme creators or sophisticated agents capable of controlling hundreds of thousands of automated bot accounts.

"This is an arms race," said Emilio Ferrara, assistant research professor of computer science at the University of Southern California, adding that those who propagate the falsehoods "will always have more resources, more interests, and more capabilities and time to devote to what they are doing than a bunch of academics."

Ferrara and his colleagues, however, play a critical role in documenting the current state of digital propaganda. Ferrara was ahead of the vast majority of academic investigators and journalists in discerning the role of propaganda-spreading bots (he said his paper "Social bots distort the 2016 U.S. Presidential election online discussion" was the only peer-reviewed paper on the subject to appear before the 2016 U.S. presidential election). He continued his work on computational political propaganda by publishing findings on the 2017 French presidential election, and is currently compiling information from the German parliamentary election in September, in which incumbent Chancellor Angela Merkel retained power despite a challenge from the far-right Alternative for Germany (AfD) party.

Ferrara is not alone in ramping up efforts around political bots following the unexpected election of Donald Trump to the U.S. presidency, in which the true role computational propaganda played is still being investigated and debated. Researchers from Germany, the U.S., and the U.K. have been at the forefront of studying how people react to propaganda campaigns. Among those who have already published on the German election is Lisa-Maria Neudert, a researcher at the University of Oxford's Computational Propaganda Project. A paper by Neudert and colleagues Bence Kollanyi and Philip Howard, the project's principal investigator, found Germans on the whole were far more discerning than Americans in deciding where to get their news; for instance, German social media users shared four links to professional news sources for every one link to junk news prior to the September election, whereas U.S. users shared professional and junk sources at a ratio of 1:1.
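
As a rough illustration of how such a sharing ratio can be computed, the sketch below (in Python) classifies shared links by domain and tallies professional versus junk sources. The domain lists and the professional_to_junk_ratio helper are hypothetical stand-ins, not the Oxford project's actual classification or code.

```python
from urllib.parse import urlparse

# Hypothetical domain lists; the Oxford project maintains its own curated classifications.
PROFESSIONAL_DOMAINS = {"spiegel.de", "faz.net", "zeit.de", "nytimes.com"}
JUNK_DOMAINS = {"example-junk-news.com", "fake-politik-blog.net"}

def professional_to_junk_ratio(shared_urls):
    """Return the ratio of professional-news links to junk-news links in a sample."""
    professional = junk = 0
    for url in shared_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in PROFESSIONAL_DOMAINS:
            professional += 1
        elif domain in JUNK_DOMAINS:
            junk += 1
    return professional / junk if junk else float("inf")

# Example: four professional links for every junk link -> ratio of 4.0 (the German pattern);
# a 1:1 mix, as observed for U.S. users, would return 1.0.
urls = ["https://www.spiegel.de/a", "https://zeit.de/b", "https://faz.net/c",
        "https://www.nytimes.com/d", "https://example-junk-news.com/e"]
print(professional_to_junk_ratio(urls))  # 4.0
```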

Neudert said the German public was well aware of the rise of junk news and slanted information, much of it spread by bots, leading up to the German election. That knowledge, she said, was reinforced by both the legitimate German media and mainstream political parties.

"I think the work of the media and the work of politicians in Germany are to be thanked here, because they had really great public awareness," she said. Among the factors contributing to that awareness, she said, was an address Merkel gave to the German parliament in November 2016 warning about the potentially outsized influence wielded by bots, and a pledge by the major German political parties (including the Social Democratic Party, the Christian Democratic Union, and the Green Party) to refrain from using social bots in campaigning (the AfD, according to Neudert's research, did not join in the pledge, though it later distanced itself from a statement that it might use them).

Uncovering Digital Lies

Such awareness did not stop purveyors of fake news and bots from spreading fear and doubt in the days leading up to the September election, however, according to researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab). For instance, the researchers found a Twitter account called @von_Sahringen claiming to be a left-wing election worker, which said ballots cast for the AfD would be invalidated.

However, the researchers said, a reverse image search suggests @von_Sahringen is a fake account rather than a genuine user: its profile picture is of Pakistani actress Aiza Khan. The researchers also noted the account's unusual activity profile. "It was created in February, but according to a machine scan of its posts, only began posting high quantities in August, in the build-up to the election," they wrote.
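
One way researchers surface that kind of anomaly is to compare an account's creation date against the timing of its posts. The following sketch, with arbitrarily chosen thresholds, flags accounts that sat dormant for months and then concentrated nearly all of their posting in a short recent window; it is an illustrative heuristic, not the DFRLab's actual tooling.

```python
from datetime import datetime, timedelta

def dormant_then_bursty(created_at, post_times, dormancy_days=90,
                        burst_window_days=30, burst_share=0.8):
    """Flag accounts that were silent for months after creation and then
    posted most of their content in one short, recent burst."""
    if not post_times:
        return False
    post_times = sorted(post_times)
    quiet_until = created_at + timedelta(days=dormancy_days)
    early_posts = sum(1 for t in post_times if t <= quiet_until)
    burst_start = post_times[-1] - timedelta(days=burst_window_days)
    burst_posts = sum(1 for t in post_times if t >= burst_start)
    return early_posts == 0 and burst_posts / len(post_times) >= burst_share

# Example: an account created in February that stays silent until August,
# then posts every two hours in the run-up to the vote, gets flagged.
created = datetime(2017, 2, 1)
posts = [datetime(2017, 8, 20) + timedelta(hours=2 * i) for i in range(200)]
print(dormant_then_bursty(created, posts))  # True
```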

Another DFRLab Google reverse image search, performed in partnership with the German newspaper BILD, exposed an image purporting to show a victim of sexual assault surrounded by a crowd of men on New Year's Eve 2016. The image was actually the head of a model superimposed on the body of a television news reporter who had been assaulted in Egypt in 2011. "Further investigation showed that the image initially emerged from the anti-Semitic community in 2015 before pro-AfD Facebook and Twitter pages, including bots and actual representatives of the party, began sharing it," the lab posted. "The AfD's use of this manipulated photo highlights its focus on galvanizing anti-migrant and anti-Islam supporters ahead of the vote. It also shows the lengths far-right groups will go to produce eye-catching fakes."

Social Media Platforms Falling Short?

Though the work of entities such as the DFRLab and other fact-checking organizations is valuable in detecting computational propaganda, such detection is reactive; fake news is exposed only after it has spread, and its origin is extremely difficult to discern. Researchers say they would like more data from the dominant platforms that enable the spread of disinformation, such as Facebook and Twitter. For instance, Ferrara said, he cannot get data such as Internet Protocol (IP) addresses to try to track down a given post's provenance.

"For the researchers in the public domain, including myself and pretty much everyone else who is actually trying to understand what is going on right now, it's impossible for us to do attribution," he said. "If you rely, for example, on Twitter data, that data does not provide IP addresses or all sorts of things you would need to do forensics or attribution. We can't do that and have never heard of anyone in the research community who claims they can do that sort of attribution. You can test a number of hypotheses and highlight all sorts of different anomalies. To go to the next step, you either have to collaborate with law enforcement or the various agencies that are currently carrying out investigations, or you need someone on the inside."

Neither Facebook nor Twitter replied to requests for interviews for this story; each has posted blog entries outlining its efforts to counteract disinformation campaigns on its platform, from deleting accounts shown to be fronts for bots to refusing ad dollars from known organs of the Russian propaganda campaign. That is not nearly enough, according to Neudert and Ferrara.

"When the fundamentals of democracy are at risk, researchers need to have access to ascertain the extent and scope of the problem in order to help develop countermeasures," Neudert said.

Ferrara said the continuing "arms race" means the disinformationists' campaigns are becoming ever more sophisticated. In a paper presented at the International AAAI Conference on Web and Social Media in May, Ferrara outlined how advanced artificial intelligence is making bot detection increasingly difficult.

"We have some clues, and they are all obvious if you spend a little time looking at the bots over many years," he said. "Five years ago, these bots were trivial. They were repeating stuff at an incredibly high rate. Those are the kind of bots Twitter learned how to find, and the type of stuff they claim they remove, and that's why they think they are doing a good job. The reason why I think they are not doing a great job and why hundreds of other people in the research community believe the same is that probably everyone at this stage would be able to detect those simple bots; detecting the more sophisticated ones that emulate how people behave is much more complicated."

Ferrara also said the consequences for platforms that fail to stem the tide of false information need to be more severe.

"If companies have no reason to go after this content, if you're not paying a price for not policing that activity, then we're doomed. There's no way this arms race can be won."

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
