Announcing OTI’s New Transparency Reporting Toolkit Focused on Content Takedowns
Blog Post
Oct. 26, 2018
The internet has become an increasingly important tool for free expression. As online platforms and messaging services proliferate, they have become the most effective means for people to get their messages out and to connect with one another. However, the companies that provide these services, namely internet platforms and telecommunications companies, have also begun policing who uses their services and how their services are used. As a result, these companies have assumed the role of gatekeepers of online speech, often removing or blocking user content for a variety of legal and policy reasons. Yet there is a significant lack of transparency around how these companies manage free expression online, which has sparked concerns about their expanding role as private censors.
Over the past decade, as detailed in this timeline, many technology companies have begun issuing transparency reports that disclose information related to government requests for user data. More recently, in response to pressure from policymakers and civil society, some companies have also begun disclosing qualitative and quantitative data on how they respond to the various legal requests they receive to take down content, and on the content they take down based on their own terms of service. The disclosure of this data is a vital first step toward enabling the public to hold these companies accountable in their roles as arbiters of speech, and to hold accountable the governments and private parties that seek to censor online speech.
However, there is currently little standardization in how companies report on content takedowns, particularly in which specific data points are reported and how granular the data is. Meanwhile, many companies are still not reporting at all, or are reporting incomplete data on only a few types of takedowns. As a result, it is difficult to draw meaningful conclusions about how content takedown practices are affecting online free expression, or for the public to identify where they think companies are doing too much, or not enough, to address content issues on their platforms and networks. There is a clear need for industry-wide guidance and best practices for reporting on these issues.
Yesterday, to address this gap, New America’s Open Technology Institute (OTI) released its new Transparency Reporting Toolkit, focused on content takedown reporting by internet platforms and telecommunications companies across the globe. The first Transparency Reporting Toolkit, a joint project with Harvard University’s Berkman Klein Center for Internet & Society, was released in 2016. Its aim was to make it easier for companies to create and improve their transparency reports around government demands for user data, and to make transparency reporting more consistent, easier to understand, and more effective. The new edition of the Toolkit is intended to do the same for content takedown reporting: it identifies which companies are reporting on content takedowns and highlights their best practices to show how this type of reporting can be improved, standardized, and expanded.
To develop this Toolkit, OTI conducted a survey of 70 global internet and telecommunications companies that issue transparency reports. We found that 35 of them were reporting on content takedowns, with reports spanning six categories: government and other legal demands, copyright requests, trademark requests, demands for network shutdowns and service interruptions, Right to be Forgotten delisting requests, and Community Guidelines-based content takedowns. However, few, if any, companies report on all six categories, and the extent and form of their reporting varies widely.
Our study found that the majority of the 35 companies reported on takedowns based on government and other legal demands. Most online content hosts also reported on content takedowns based on alleged copyright infringement, although only a handful similarly reported on trademark-related removal requests. Reporting on network shutdowns and service interruptions was generally more common among telecommunications companies than among platforms. However, even where companies have reported such information, the data has often been limited by legal restrictions in the relevant countries that hinder disclosure.
Our survey also revealed that reporting on requests under the European Right to be Forgotten (RTBF) is currently very rare, and much more data is needed to better understand how that right is being enforced (or misused). Only Microsoft and Google have begun reporting on such requests since rulings in the European Union in 2014, and in Russia in 2016, determined that citizens could request that search engines delist search results tied to their names if the information is “inadequate, irrelevant or excessive.” Given that this right was also included in the EU’s General Data Protection Regulation, which came into effect in May 2018, and that some companies that don’t offer search services have begun receiving RTBF demands even though the right does not apply to them, we hope and expect that there will be greater reporting on these requests in the future.
In addition, our survey found that before this year, reporting on takedowns based on violations of a company’s own terms of service or content guidelines was very uncommon and inconsistent. However, in April 2018, Google issued its first Community Guidelines enforcement report regarding YouTube takedowns. Shortly after, in May 2018, Facebook released its first Community Standards enforcement report, which also provided greater insight into how the platform’s Community Standards are enforced. These two reports are currently the most comprehensive transparency reports on Community Guidelines-based content takedowns, and we predict that they will be the first movers in a broader trend toward more reporting on such takedowns. However, as highlighted in the Toolkit, there is still a lot of room for improvement and experimentation in this emerging area of reporting, and therefore a need for more efforts to establish clear best practices in this category.
That is why OTI and a coalition of organizations, advocates, and academic experts who support the right to free expression online recently released the Santa Clara Principles on Transparency and Accountability in Content Moderation. That document outlines minimum standards that internet platforms must meet in order to provide adequate transparency and accountability around their efforts to take down user-generated content or suspend accounts that violate their rules, including providing detailed numbers (i.e., transparency reports) around their takedown activity, robust notice to affected users, and a clear path for users to appeal takedowns and account suspensions.
Such calls for greater transparency, accountability, and redress around content takedowns just got louder yesterday, when several dozen civil and human rights groups endorsed similar principles through a new campaign. That effort is pressing internet companies to “Change The Terms” and use their terms of service to better address hateful activities online, while also being much more transparent and accountable to users about how they apply those terms. OTI did not join that effort because of concerns about its breadth, especially the suggestion that not just user-generated content platforms but also infrastructure-level companies like domain registrars and web hosts should aggressively police content. On that issue, we agree with our colleagues at the Electronic Frontier Foundation and academics like Daphne Keller at Stanford’s Center for Internet and Society that normalizing private censorship at those layers of infrastructure would set a dangerous precedent. However, we share this new coalition’s concern about hateful activities online and offline, strongly agree with the group’s calls for greater accountability around content takedowns, and appreciate their substantive contribution to a complex and critically important issue.
The demand for transparency is clearly growing, as this new effort shows. And generally, companies have been responsive to that demand, as the number and breadth of transparency reports are growing too. However, our survey found that while a growing number of global internet and telecommunications companies issue transparency reports that touch on many categories of content takedowns, there is still significant variation in that reporting, which hinders its usefulness. The best practices outlined in the Toolkit can serve as a guide for companies to overcome those differences, to improve and standardize their transparency reporting on content takedowns, or to issue a report for the first time.
In particular, the best practices in our Toolkit highlight that platforms operating internationally need to expand their reporting to make clear how they handle requests in every country in which they operate. In addition, companies that operate multiple platforms and services should break down the requests they receive by product, in order to show which services (and therefore which specific types of communications and content) are being targeted most. Furthermore, companies need to provide specific numbers on how requests affect user accounts and individual items of content. Only with such data can we truly understand the broader impact on online freedom of expression of content takedowns, whether demanded by governments, requested by other parties, or initiated based on a company’s own terms of service. We hope that this new Toolkit adds to that understanding, and serves as a basis for continued dialogue around best practices for transparency reporting on content takedowns.