Pressing Facebook for More Transparency and Accountability Around Content Moderation

Blog Post
Nov. 16, 2018

On Tuesday, New America’s Open Technology Institute (OTI) and 79 organizations from around the world signed on to an open letter calling on Facebook CEO Mark Zuckerberg to implement more meaningful due process around the social network’s content moderation.

Specifically, we are pushing Facebook and other internet companies to adopt the recommendations outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, which were launched in May by OTI and a coalition of organizations, advocates, and academic experts who support the right to free expression online. That document outlines the minimum standards tech platforms should meet in order to provide adequate transparency and accountability around their efforts to regulate user-generated content and to take action against user accounts that violate their rules. The Principles demand three things:

  • Numbers: detailed transparency reporting on how much content and how many accounts are affected by content moderation efforts;
  • Notice: clear notice to users about when and why their content has been taken down or their accounts suspended; and
  • Appeals: a robust process for affected users to appeal the platform’s decision to a human decision maker.

Our letter especially focuses on the need for a more robust appeals process at Facebook. The company rolled out a new process for appealing the takedown of individual posts earlier this year, but it applied only to a few specific categories of content—nudity and sexual activity, hate speech, and graphic violence. The letter urges the company to continue expanding that process to all content takedowns regardless of category, while repeating calls for greater transparency.

Yesterday, Facebook demonstrated new progress in providing that transparency by issuing the second edition of its Community Standards Enforcement Transparency Report, which provides data about Facebook’s removal of user content that violates its community standards. Facebook released a detailed version of those Community Standards in April, followed in May by the first edition of the transparency report that was updated yesterday. Facebook is only the second company to attempt such a comprehensive transparency report on its content moderation practices—Google issued its own transparency report on YouTube’s Community Guidelines enforcement in April, though it notably didn’t cover any other Google products. As we said at the time, those were both important first steps toward meaningful transparency and accountability, but more needs to be done.

Facebook took a few more steps with this latest edition of its content moderation transparency report, introducing data on two new categories of content: bullying and harassment, and child nudity and child sexual exploitation. In addition, the report notes that Facebook intends to share data in the future on how much content the company has taken down by mistake, based on the data it is collecting through its new appeals process. These are both welcome developments.

However, as emphasized by the Santa Clara Principles and OTI’s recently published Transparency Reporting Toolkit on content takedown reporting, there are a number of additional improvements and next steps that we think Facebook should implement to make its future reports even more useful.

  1. One number to rule them all. Right now, Facebook’s report is broken down by type of content, such as hate speech and spam, giving data on how prevalent each category of content is on the platform and how much of it Facebook acted on. However, what is still lacking—in part because the report still doesn’t cover all the different types of content that are barred by Facebook’s community guidelines—is a single combined number expressing the prevalence of guideline-violating content overall, and a single combined number indicating how many pieces of content were taken down overall. We need a unified overview of all moderation activity on the platform, not just silos of activity around specific categories of content.
  2. More details about automated content identification and takedowns. As companies quickly move to identify, and in some cases even remove, content based on automated rather than human decision-making, transparency and accountability around that decision-making become even more important. Facebook has offered a sliver of transparency on this point: in a recent blog post detailing its moderation of terrorist content, Facebook offered specific numbers about how much terrorist content was detected by automated tools that can identify and take down new uploads of older content that has previously been found to violate the rules (one common approach to this kind of re-upload detection is sketched after this list). The same post also detailed how much content was identified by (presumably machine learning-based) tools that flag new, potentially terrorist content for human review. Providing such data for all categories of content should be a key feature of the next transparency report.
  3. How many people are affected? Although it’s important to know how much content is taken down as part of Facebook’s content moderation operations, it’s also important to understand how many human beings have been silenced as a result. Right now, there’s no data on how many Facebook users are directly affected—neither data about how many users’ content is taken down, nor about how many accounts are temporarily or permanently suspended as a result of rule violations. We need both types of data to truly understand how many people these policies are impacting.
  4. How much content is flagged by users, versus how much is taken down? A great deal of the content taken down by Facebook was first identified thanks to flagging by users themselves. However, we don’t have a good idea of how many pieces of content are flagged, and how many of those are ultimately determined to be in violation of the rules. Having that data would be incredibly useful in understanding the volume of content that Facebook has to subject to human review, and how accurate or inaccurate users’ flagging behavior typically is. Both have implications for what policymakers and the public can reasonably expect from companies in terms of moderation, and may hold clues for how to better design moderation policies and procedures.
  5. Who flagged the content? Not all content flags come from regular users of a platform—some come from “trusted flaggers” and from government agencies with direct lines to the companies via “Internet Referral Units” (IRUs). The possibility of informal censorship through government pressure via these channels is very real, and it raises serious human rights concerns that call for even more transparency. That’s why we think Facebook and other companies also need to disclose how many flags come from such sources, with as much specificity as possible. Automattic, the company behind WordPress.com, has taken a first step in this direction by reporting specific numbers about how many IRU referrals it receives, but much more could be done, and Facebook could be the first company to publish comprehensive numbers in this area. All the better if that data, in combination with the data we asked for in #4, allowed us to compare the accuracy of the different flagger populations—do governments inaccurately flag content more often than users do, or less often? We want to know! (A rough sketch of how such a comparison might be computed appears after this list.)
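
The second item above mentions automated tools that recognize new uploads of content that has already been found to violate the rules. As an illustration only, and not a description of Facebook’s actual systems, the short Python sketch below shows one common approach to that kind of re-upload detection: keep a set of fingerprints of previously removed content and compare each new upload against it. All function names and data here are hypothetical.

    import hashlib

    # Fingerprints of content that human review has already confirmed as
    # violating the rules (hypothetical data store).
    known_violating_hashes = set()


    def fingerprint(content: bytes) -> str:
        """Return a stable fingerprint for a piece of content.

        A production system would likely use a perceptual hash so that
        re-encoded or lightly edited copies still match; an exact SHA-256
        hash is used here only to keep the sketch short.
        """
        return hashlib.sha256(content).hexdigest()


    def record_violation(content: bytes) -> None:
        """Remember content that has been confirmed as violating."""
        known_violating_hashes.add(fingerprint(content))


    def is_known_reupload(content: bytes) -> bool:
        """Check whether a new upload matches previously removed content."""
        return fingerprint(content) in known_violating_hashes


    # Example: once an item is recorded, an identical re-upload is caught.
    record_violation(b"bytes of a previously removed video")
    print(is_known_reupload(b"bytes of a previously removed video"))  # True
    print(is_known_reupload(b"bytes of an unrelated new upload"))     # False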
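
Similarly, items 1, 4, and 5 come down to simple arithmetic over data that only Facebook currently holds. The sketch below, using entirely invented placeholder numbers, shows the kinds of calculations that would become possible if the report included per-category takedown totals and flag outcomes broken down by flagger source: one combined takedown number, and an “upheld on review” rate for each flagger population. Category names, sources, and figures are all hypothetical.

    from collections import defaultdict

    # Per-category takedown counts, as a transparency report might list them
    # (hypothetical numbers).
    takedowns_by_category = {
        "hate_speech": 2_900_000,
        "spam": 1_200_000_000,
        "bullying_and_harassment": 2_100_000,
    }

    # Item 1: one combined number across all categories.
    total_takedowns = sum(takedowns_by_category.values())
    print(f"Total pieces of content actioned: {total_takedowns:,}")

    # Items 4 and 5: flags broken down by who submitted them and whether
    # review upheld them (hypothetical records; "government_iru" stands for
    # a government Internet Referral Unit).
    flags = [
        {"source": "user", "upheld": True},
        {"source": "user", "upheld": False},
        {"source": "user", "upheld": False},
        {"source": "government_iru", "upheld": True},
        {"source": "government_iru", "upheld": True},
        {"source": "government_iru", "upheld": False},
    ]

    counts = defaultdict(lambda: {"flagged": 0, "upheld": 0})
    for flag in flags:
        counts[flag["source"]]["flagged"] += 1
        if flag["upheld"]:
            counts[flag["source"]]["upheld"] += 1

    # Share of each population's flags that review found to violate the rules.
    for source, c in counts.items():
        accuracy = c["upheld"] / c["flagged"]
        print(f"{source}: {c['flagged']} flags, {accuracy:.0%} upheld on review")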

Although there is always more that can be done, we want to recognize this latest report as another valuable step toward promoting greater transparency around how Facebook regulates users’ speech, and we hope to see more companies issuing similar reports soon. As internet platforms’ power to decide what we are allowed to say online grows, the need for transparency and accountability from those companies grows too. Mark Zuckerberg himself published a lengthy post on Thursday afternoon discussing the future of content governance at Facebook. The post suggests that the company will continue to move in the direction of increased transparency and accountability, which we hope is the case. Indeed, the future of online free expression depends on it.
