The EU’s Digital Services Act Takes a Positive Step Towards Transparency and Accountability, But Also Raises Some Serious Questions
Blog Post
Jan. 21, 2021
On December 15, 2020, the European Commission released drafts of the highly anticipated Digital Services Act (DSA) and Digital Markets Act (DMA). The DSA outlines a set of new obligations that internet platforms must meet when they remove illegal and harmful content from their services. It also places numerous obligations on platforms to ensure they provide adequate transparency and accountability around their content moderation and curation operations. The DMA, meanwhile, seeks to address unfair competition. Together, the two proposals represent a major regulatory step that responds to the pressure tech companies currently face to provide consumers with access to a range of safe digital products and services. Companies that fail to adhere to the proposed EU content policies could face fines of up to 6% of their global revenues. Additionally, companies that continuously violate the requirements could find their platforms temporarily banned in the EU.
Although the recently released version of the DSA is just a draft and could undergo numerous revisions before it is enacted, the proposal has wide-ranging implications for freedom of expression online. In particular, it includes some highly problematic provisions regarding notice and takedown. The proposed notice and takedown framework would essentially allow private companies to determine what content is illegal, and companies would be responsible for reviewing these decisions through their own internal processes rather than through external, established judicial systems. This runs counter to the principles of the rule of law and, concerningly, outsources censorship powers to private companies. Internet platforms hold a vast amount of power as both economic and speech gatekeepers, and the DSA and DMA are efforts to address this growing issue. The DSA’s notice and takedown provision, however, goes against this spirit. We strongly encourage the European Commission to reconsider this provision and to emphasize that the judiciary must be the sole arbiter of the legality of speech.
The DSA does, however, include some positive components of rights-respecting online content regulation. These include the “Good Samaritan” principle, which affords intermediaries liability protections when they voluntarily moderate potentially illegal user-generated content. In addition, the DSA includes numerous positive transparency and accountability provisions, which are in line with recommendations OTI has made in our past work and in the DSA consultation comments we previously submitted. DSA provisions relevant to OTI’s recommendations include:
- Providers of intermediary services must publish at least one transparency report every year, outlining the scope and scale of their content moderation efforts. OTI has pushed companies to publish regular and granular transparency reports, which shed light on how the companies police speech on their services. In addition, the DSA outlines that online platforms should disclose how automated tools are used during the content moderation process and provide indicators of accuracy in these reports. This is integral to understanding where these tools work, and how and where they fall short. While this transparency reporting requirement is a good first step, the DSA’s provision as currently written has some limitations and could be strengthened. Namely, this stipulation requires reporting on the average time it took a company to act after it received an order related to illegal content from a Member State. Metrics like this that emphasize speed could be used as a cudgel for future regulation that presses companies to act quickly rather than accurately. We encourage future iterations of the DSA to consider what meaningful transparency around content moderation looks like.
- Providers of intermediary services must outline their content moderation policies and practices in their Terms of Service, including to what extent algorithmic decision-making and human review are used. OTI has pushed platforms to explain how they use automated tools during the content moderation process, and to what extent and where humans are kept in the loop. This is especially important given that research has found automated tools are often ineffective at moderating categories of content that require subjective and contextual decision-making.
- Providers of hosting services must provide adequate notice to individuals who have had their content or accounts moderated. Providers must also give individuals access to an appeals process. This is in line with the recommendations OTI and other free expression advocates and organizations have outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, which set out minimum standards that platforms should meet in order to provide adequate transparency and accountability around their content moderation efforts.
- Platforms must provide transparency around their advertising operations, including the parameters used to determine whom an advertisement is delivered to. In addition, very large online platforms that display advertising must establish an ad library, which includes data on the nature, targeting, and delivery of advertisements on their service. As we have outlined in our report Special Delivery: How Internet Platforms Use Artificial Intelligence to Target and Deliver Ads, there is a serious lack of transparency around the algorithmic tools used to target and deliver ads, and this can often result in discriminatory delivery of critical ads, including housing, employment, and credit ads. Transparency around these processes is therefore vital.
- Very large online platforms must conduct risk assessments in order to identify threats to fundamental rights, including freedom of expression and information. These companies must also implement any necessary mitigation measures. These risk assessments should particularly account for systemic risks related to content moderation, recommender, and ad targeting and delivery systems. These provisions are in line with the recommendations we made in our report series Holding Platforms Accountable: Online Speech in the Age of Algorithms.
- Very large online platforms that use recommendation systems must outline in their Terms of Service the primary parameters these systems use. Companies must also explain how and to what extent users can control these parameters to adjust their platform experience. The DSA also outlines that users should have access to at least one option for modifying their experience that is not based on profiling. This aligns with the recommendations we put out in our report Why Am I Seeing This? How Video and E-Commerce Platforms Use Recommendation Systems to Shape User Experiences. It is important to note, however, that platforms use hundreds of signals in their recommendation engines, and these factors are constantly changing. As the DSA is refined, the European Commission should strongly consider what meaningful transparency around the building blocks of recommendation systems looks like.
The first draft of the DSA includes some positive provisions that will prompt greater transparency and accountability around how internet platforms police speech and use algorithmic decision-making. However, it also includes some misguided provisions that, despite their intent, will not result in meaningful transparency, as well as some concerning requirements that would undermine the rule of law and strengthen platforms’ roles as speech gatekeepers. We strongly encourage European lawmakers to ensure that future iterations of the DSA emphasize meaningful transparency provisions, especially because the DSA will likely serve as the foundation for further regulation in these issue areas, and it is vital that its provisions protect international human rights and the rule of law.