BLOG@CACM: Security and Privacy

The Trouble with Facebook

Jason Hong considers how computer scientists and technologists could help guide regulation of the social media giant.

Posted by CMU Professor Jason Hong
November 12, 2021 https://bit.ly/3qu5C9H

The recent release of the Facebook papers by a whistleblower has confirmed that leaders at the company have long known about problems facilitated by their platform, including disinformation, misinformation, hate speech, depression, and more. There's been a lot of talk about regulators stepping in, with Facebook perhaps allowing them to inspect its algorithms for prioritizing and recommending content on Instagram and in News Feed. How can we as computer scientists and technologists help here? What kinds of questions, insights, or advice might our community offer these regulators?


Opening Up the Algorithms Is Not Enough

The first piece of advice I would offer is that, yes, Facebook should open up its algorithms to regulators, but that's nowhere near enough. If regulators want to curb problems like hate speech, disinformation, and depression, they also need to take a close look at the processes Facebook uses to develop products and the metrics it measures.

There will probably be some parts of Facebook's algorithms that are understandable, such as code that blocks specific Web sites or prioritizes sponsored posts businesses have paid for. However, it's likely that the core of Facebook's ranking relies on machine learning models that are not inspectable in any meaningful way. Most machine learning models boil down to large N-dimensional matrices of learned weights that our poor primitive brains have no chance of comprehending.

It is also very likely that Facebook's machine learning models take into account hundreds of factors, such as the recency of a post, the number of likes, the number of likes from people similar to you, the emotional valence of the words in the post, and so on. None of those factors are obviously wrong or directly linked to disinformation or the other problems Facebook is facing.
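To make this concrete, here is a purely illustrative sketch of how a handful of such factors might be combined into a single ranking score. The feature names and weights are my own invention, and a real system would learn far more factors with a nonlinear model; the point is only that no individual factor looks objectionable on its own.

```python
# Purely illustrative sketch: the features and weights here are hypothetical,
# not Facebook's actual ranking model.
from dataclasses import dataclass
import math

@dataclass
class Post:
    hours_since_post: float   # recency of the post
    num_likes: int            # total likes
    likes_from_similar: int   # likes from people similar to the viewer
    anger_valence: float      # emotional charge of the text, 0.0 to 1.0

def engagement_score(post: Post) -> float:
    """Toy linear model: each factor nudges the predicted engagement up or down."""
    recency = math.exp(-post.hours_since_post / 24.0)       # newer posts score higher
    return (2.0 * recency
            + 0.5 * math.log1p(post.num_likes)
            + 1.5 * math.log1p(post.likes_from_similar)
            + 1.2 * post.anger_valence)                      # anger-inducing posts get a boost

# Ranking a feed is then just sorting by the predicted score.
feed = [Post(2, 40, 5, 0.9), Post(30, 400, 2, 0.1)]
feed.sort(key=engagement_score, reverse=True)
```

Notice that nothing in this toy model mentions misinformation at all, which is exactly why inspecting the code or the weights, by itself, tells a regulator very little.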

From what was disclosed by the whistleblower and from what other researchers have investigated, one of the key issues is that Facebook’s algorithms seem to prioritize posts that are likely to have high engagement. The problem is that posts that get people angry tend to have really high engagement and spread quickly, and that includes posts about politics, especially misinformation and disinformation. But you can’t just block posts that have high engagement, because that also includes posts about the birth of new babies, graduations, and marriages, as well as factual news that may be highly relevant to a community.

As such, opening up the algorithms is a good and important first step, but is by no means sufficient.


Facebook has “break the glass” measures to slow the spread of misinformation and extremism. Why aren’t these features turned on all the time?

Facebook has previously disclosed that it has several safety measures that were turned on during the 2020 election period in the US. While details are not entirely clear, it seems that this safe mode prioritizes more reliable news sources, slows the growth of political groups that share a lot of misinformation, and reduces the visibility of posts and comments that are likely to incite violence.
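In code, such a safety switch could be as simple as a flag that re-weights the ranking toward source reliability and demotes likely-inciting content. The sketch below is entirely hypothetical; the names, weights, and thresholds are assumptions for illustration, since Facebook has not published the mechanism.

```python
# Hypothetical sketch of a "break the glass" safe mode. All names, weights,
# and thresholds are invented for illustration; Facebook's actual measures
# have not been disclosed in this detail.

def rank_score(predicted_engagement: float,
               source_reliability: float,   # 0.0 (unreliable) to 1.0 (reliable)
               incitement_risk: float,      # 0.0 (benign) to 1.0 (likely to incite violence)
               safe_mode: bool = False) -> float:
    """Combine predicted engagement with trust and safety signals."""
    score = predicted_engagement
    if safe_mode:
        score *= 0.5 + 0.5 * source_reliability   # favor more reliable news sources
        if incitement_risk > 0.8:
            score *= 0.1                           # sharply demote likely-inciting posts
    return score
```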

If Facebook already has these measures, why aren't they turned on all the time? If Facebook already knows that some of its online groups spread extremism and violence, why isn't it doing more to block them? Hate speech, extremism, misinformation, and disinformation are not things that only appear during election season.


Facebook is a Large-Scale Machine Learning Algorithm Prioritizing Engagement

The Facebook papers suggest that leadership at Facebook prioritizes engagement over all other metrics. From a business model perspective, this makes sense, since engagement leads to more time spent on site, and thus more ads that can be displayed, and thus higher revenue. One can think of the entire company itself as a large-scale machine learning algorithm that is optimizing its products and features primarily for engagement.

Part of the problem here is that things like engagement are easy to measure and clearly linked to revenues. However, engagement overlooks other important things, for example the well-being of individuals: whether people are thriving, feeling supported, and feeling connected and well informed. These metrics are much harder to measure, but one could imagine that if product teams at Facebook prioritized these or other similar metrics, we would have a very different kind of social media experience, one much more likely to be positive for individuals and for society.


Facebook can't be fully trusted to police itself. It needs to open up its algorithms and platform to qualified researchers.

The problem with the metrics proposed above is that I doubt it is possible to force a company to adopt new kinds of metrics. Instead, what I would recommend is requiring Facebook to open up its algorithms, metrics, and platform to a set of qualified researchers around the world.

Facebook has repeatedly demonstrated that it cannot police itself, and has shut down many external efforts aiming to gather data about its platform. A one-time examination of Facebook is not likely to change its direction in the long term, either. Furthermore, regulators do not have the expertise or resources to continually monitor Facebook. Instead, let's make it easier for scientists and public advocacy groups to gather more data and increase transparency.

That is, internally, Facebook will probably still try to prioritize engagement, but if product teams and the Board of Directors also see public metrics like “hate speech spread” or “half-life of disinformation” published by researchers, they would be forced to confront these issues more.
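As a rough sketch of what one such public metric could look like, suppose researchers were given share timestamps for posts flagged by independent fact-checkers; the "half-life of disinformation" could then be the time it takes a flagged post to accumulate half of its eventual shares. The data format and aggregation below are my own assumptions, not an established standard.

```python
# Sketch of a hypothetical "half-life of disinformation" metric.
# Assumes access to share timestamps (in hours after posting) for posts
# that independent fact-checkers have flagged; that access is the whole ask.
from statistics import median

def post_half_life(share_times_hours: list[float]) -> float:
    """Hours until half of a post's total shares have occurred."""
    return median(share_times_hours)

def platform_half_life(flagged_posts: dict[str, list[float]]) -> float:
    """Median half-life across all flagged posts. A lower number means
    misinformation burns through the network faster; interventions that
    slow its spread should push this number up over time."""
    return median(post_half_life(times) for times in flagged_posts.values())

# Toy data: two flagged posts and the hours at which each share happened.
flagged = {
    "post_a": [0.5, 1.0, 1.5, 2.0, 6.0, 12.0],
    "post_b": [0.2, 0.4, 0.8, 1.0, 3.0],
}
print(round(platform_half_life(flagged), 1))  # about 1.3 hours with this toy data
```

A number like this, published regularly by independent researchers alongside similar measures of hate speech reach, would give product teams and the Board an external scoreboard that is hard to ignore.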

Now, these scientists and public advocacy groups would need a fair amount of funding to support these activities. There are also many questions of who would qualify, how to ensure data integrity and prevent accidental leakage of sensitive data, and how to ensure transparency and quality of the analyses. However, this approach strikes me as the best way of changing the incentives internally within Facebook.


Conclusion

Facebook opening up its algorithms to regulators is not enough. Instead, my advice is to look at its existing safety measures and to require Facebook to open up its platform to qualified researchers. I will now pass the question to you: What questions and advice would you offer regulators?


Comments

Great post! I agree with the suggestion that opening up the algorithms to the regulators is not enough. More transparency to the regulators, researchers, and users will help improve the social platforms. Even with Facebook opening up its algorithms and platform, it would still be challenging to bridge the gap between the high-level desired policies for social good, and low-level algorithm design and software implementations. As a security researcher, I see great opportunities for interdisciplinary collaborations and responsibilities for shaping better future social networks for our next generations. When drafting the regulations for social networks, the regulators might want to hear the voices of computer science researchers, social science researchers, developers, and users to define enforceable regulations.

Yuan Tian

 
