Twitter’s Latest Transparency Report Sheds New Light on Content Moderation, But More Work Remains To Be Done
Blog Post
Dec. 19, 2018
On December 12, Twitter released its 13th twice-yearly Transparency Report, which for the first time included data and insights on how the company enforces its Twitter Rules and the impact these content policies have had on Twitter users. This move is the latest in a growing trend of internet platforms releasing transparency reports about how much content they’ve taken down based on terms of service violations. Twitter is only the third company to issue such a report, following on the heels of similar reports from YouTube and Facebook this spring.
This new report comes as OTI and its allies continue to push internet platforms for more transparency and accountability around their roles as content gatekeepers. In May, OTI and a coalition of advocates and experts released the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of guiding principles on the minimum levels of transparency and accountability that platforms should provide with regard to their content takedown practices, including the specific data we’d like to see in transparency reports. In October, we also released our latest Transparency Reporting Toolkit, surveying the field of transparency reporting on content takedowns and identifying best practices for improving reports in this area.
So, how does Twitter’s new report stack up?
As we pressed for in the Santa Clara Principles, Twitter’s report lays out the number of Twitter accounts that were reported for violating the rules, and the number of accounts it took action on by temporarily or permanently suspending them, across six categories of rule violations (abuse, child sexual exploitation, hateful conduct, private information, sensitive media, and violent threats). This data provides a solid picture of how many individual Twitter users are actually affected by the Twitter Rules, and it enables comparison of how many accounts are reported versus how many Twitter acts on. (Generally, it looks like Twitter users vastly over-report, or report based on the wrong rule: for example, although 2,814,940 accounts were reported for abusive tweets, only 248,629 of them, fewer than 9 percent, were actioned.) Importantly, Twitter also does a good job of defining exactly how it counts these items, explaining how it seeks to avoid double-counting and otherwise clarifying exactly what the data does and does not include. Many transparency reports fail to provide these basic but critical explanations.
However, there are three key questions that we hope Twitter’s future reports will also answer:
How many tweets were taken down, or otherwise actioned? Although Twitter does a good job of tracking how many accounts were reported and actioned, it doesn’t report any data on how many individual tweets were reported and taken down or otherwise actioned. We have a good idea of how many users were affected, but no information about how much speech was affected. This is the reverse of Facebook’s report, where one of our main critiques was that it only gave numbers on the pieces of content affected rather than on actual users; YouTube’s report has the same problem. All three companies should report on both the accounts and the content affected.
What are the numbers for all categories of forbidden content? As with Facebook’s report, Twitter’s report doesn’t yet cover all takedowns based on violations of the platform’s content rules. Instead, it only covers the six major categories listed above. We hope and expect future reports will be more comprehensive by including not only numbers for all individual categories of rule violations, but also aggregate numbers for all rules-based takedowns across the entire platform.
How many of the takedowns were due to automated systems? The Santa Clara Principles call for greater transparency around takedowns resulting from automated tools, which are becoming increasingly important for identifying and removing content online and which raise unique questions around accuracy, accountability, and due process. However, Twitter fails to provide any meaningful data about automated takedowns in its report. According to a recent blog post, Twitter has seen a significant decrease in the number of content takedown requests received through its reporting flow, largely due to improvements in its internal proprietary tools, which have enabled the company to automatically identify and take down rule-violating content before it is flagged. This is particularly true for categories such as terrorist content, 91 percent of which was flagged by Twitter’s internal tools. Yet despite the extensive amount of content that these tools flag and remove, the company’s transparency report offers no new information about how, or how often, they are used. Facebook’s and YouTube’s reports demonstrated similar shortcomings. This is particularly concerning given the extent to which online platforms are deploying automated tools to shape and regulate user speech.
Although there are definitely improvements we’d like to see in future iterations, the release of Twitter’s report, the third comprehensive report on terms of service-based content takedowns issued by a major tech company, is an important development. It solidifies the issuance of such reports as an industry best practice, much as Twitter helped establish the best practice of transparency reporting on government data requests and copyright takedowns earlier in the decade. Going forward, we hope to see Twitter, YouTube, and Facebook further expand and standardize their reporting in line with the best practices we’ve outlined, and we hope to see even more industry players issue reports of their own. 2018 was the year that content moderation reporting became a best practice; hopefully 2019 will be the year it becomes a widespread standard practice!