Digging into Facebook’s Fourth Community Standards Enforcement Report
Despite Progress Toward Transparency and Accountability, More Needs to be Done
Nov. 26, 2019
On November 13, Facebook released the fourth edition of its Community Standards Enforcement Report (CSER), which includes data and insights on how Facebook takes action against content that violates its community standards. The latest version of the report makes some positive strides toward providing greater transparency and accountability around Facebook’s content moderation operations, particularly across different Facebook products. However, the report still fails to include some key metrics that are vital for understanding the scope and scale of the company’s overall content moderation efforts.
As in previous versions of the CSER, this edition provides data under five key metrics for a range of content categories (see the illustrative sketch after this list):
- Prevalence: How many views of Community Standards-violating content the platform was unable to prevent.
- Content Actioned: How much content the platform took action on.
- Proactive Rate: How much violating content Facebook found and addressed before users reported it.
- Appealed Content: How many of the content removals by Facebook were challenged by users.
- Restored Content: How much content was put back on the platform as a result of appeals or for other reasons.
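Two of these metrics, prevalence and proactive rate, are effectively ratios rather than raw counts. The short sketch below is our own illustration of how such figures could be derived; the function names and all of the numbers are hypothetical assumptions for the example, not Facebook's actual data or methodology.

```python
# Illustrative only: hypothetical numbers, not actual CSER figures or Facebook's exact methodology.

def prevalence(violating_views: int, total_views: int) -> float:
    """Estimated share of all content views that were views of violating content."""
    return violating_views / total_views

def proactive_rate(found_proactively: int, content_actioned: int) -> float:
    """Share of actioned content that the platform found before users reported it."""
    return found_proactively / content_actioned

# Hypothetical quarter: 1.2 billion sampled views, 3 million of them violating;
# 4 million pieces of content actioned, 3.9 million of those found proactively.
print(f"Prevalence:     {prevalence(3_000_000, 1_200_000_000):.2%}")   # 0.25%
print(f"Proactive rate: {proactive_rate(3_900_000, 4_000_000):.2%}")   # 97.50%
```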
The new CSER includes data on one new content category: suicide and self-injury. In addition, as highlighted in a company blog post, the CSER includes expanded data on terrorist propaganda. This means that the data in the report no longer pertains only to al Qaeda, ISIS, and their affiliates. Rather, Facebook states that the report now includes data on “all terrorist organizations.” While this expansion may provide a more accurate account of terrorist propaganda on the platform, it is troubling that the CSER does not provide any indication of which groups have been added. Another change is that, for the first time, the report includes prevalence data for the regulated goods category of violations, which covers illicit firearm and drug sales. This is a good start, but going forward, we urge Facebook to publish data under this metric for all categories of content violations on its platform.
Most notably, this edition of the CSER includes data on Instagram for the first time. This is something that OTI has pushed for in the past, as it provides transparency into how Facebook moderates content across its various products. Currently, the CSER includes data on four categories of Instagram content: child nudity and child sexual exploitation, regulated goods, suicide and self-injury, and terrorist propaganda. For Instagram, however, this version of the CSER only includes data on prevalence, content actioned, and proactive rate. According to Facebook, this is because the company only launched an appeals function on Instagram during Q2 of 2019. The company has stated that it will include data on the two additional metrics of appealed and restored content in the future. In addition to following through on this pledge, Facebook should publish data under these metrics for all categories of content. Further, as Facebook embarks on new initiatives related to content moderation, such as its independent Oversight Board for Content Decisions, we strongly urge the platform to ensure that transparency reporting on content takedowns is built in as a key accountability mechanism from the outset (you can read consultation comments that OTI submitted on the Board here, as well as our thoughts on the need for greater transparency around the Board here and here).
As we have previously outlined, the metrics that Facebook discloses in its CSER provide unique insight into how the platform moderates content. However, the company has yet to disclose a number of key data points that are vital for understanding the overall scope and scale of its content moderation efforts. These data points are also necessary for comparing Facebook’s content moderation efforts across its products, as well as across the industry. The missing data points include:
- A single, combined number that highlights the prevalence of guideline-violating content across all categories, including categories that are not currently reported on separately.
- A single, combined number that highlights how much rule-violating content was taken down across all categories of content.
- How many users had their content removed and their accounts temporarily or permanently suspended as a result of rule violations.
- How much content was flagged by users, and how much of this content was subsequently taken down.
- How much content different parties flagged (e.g., trusted flaggers, government Internet Referral Units, Facebook’s automated tools).
OTI has previously discussed the important role these metrics play in providing meaningful transparency around content moderation in our Transparency Reporting Toolkit focused on Content Takedowns, which we released last fall. In addition, we have urged platforms to provide high-level data points, as well as a meaningful notice procedure and a robust appeals process, based on the recommendations set forth in the Santa Clara Principles on Transparency and Accountability in Content Moderation, which we drafted with a coalition of organizations, advocates, and academics that support the right to free expression. On the one-year anniversary of the Santa Clara Principles, we conducted an assessment of how far YouTube, Twitter, and Facebook have implemented the Principles’ recommendations thus far.
Ranking Digital Rights, an affiliate program at New America, has also pressed companies such as Facebook for greater transparency and accountability around their content moderation efforts. In May, it released the 2019 Corporate Accountability Index, which ranks 24 of the world’s most powerful telecommunications, internet, and mobile companies on their commitments and policies affecting users’ freedom of expression and privacy. The latest index found that while Facebook has made some progress toward providing greater transparency and accountability around its content moderation efforts, it needs to do more, particularly with regard to its appeals process.
The latest CSER demonstrates progress in this regard, although it would be improved by including more meaningful and granular data points. Further, in order for the CSER to be a truly meaningful mechanism for providing transparency, it must provide a more comprehensive overview of the platform’s moderation efforts across all categories of content and across each of its products and services. Given that Facebook is a gatekeeper of a significant portion of online speech today, it is vital that the company take the necessary steps to provide greater transparency and accountability around its content moderation efforts.