SCOTUS Rules in Favor of Platforms — What’s Next for Platform Accountability?
Blog Post
June 8, 2023
Section 230 of the Communications Decency Act, one of the laws most critical to the fabric of the Internet as we know it, survived a key test at the Supreme Court last month. In Gonzalez v. Google and Twitter v. Taamneh, the Supreme Court sidestepped the opportunity to weigh in on Section 230, although the Court may well revisit it in future cases. At the moment, all eyes are back on Congress. The Open Technology Institute (OTI) is focused on how legislative proposals for amending Section 230 would affect the nature of our online speech environments. We believe that the most productive way to think about this public debate is to zoom out from the text of the law and to focus on the broader context of online content moderation. Instead of proposing reforms to Section 230, Congress should use widely accepted policy principles on content moderation and algorithmic accountability to drive progress on the challenges we face in balancing free expression, security, and competition online.
What is Section 230?
Taking a step back, Section 230 of the Communications Decency Act (47 U.S.C. § 230(c)) ensures that online platforms (YouTube, Wikipedia, your local ska forum, etc.) are not liable for content that they host but do not create. The law also immunizes “interactive computer services” from liability when they make “good-faith decisions” about restricting access to objectionable content on their services, with certain exceptions including copyright, federal criminal law, and sex trafficking laws. Section 230 thus gives any online platform, large or small, the freedom to make choices about how to deal with user content without worrying about a flood of litigation. OTI believes that intermediary liability protections are critical to effective content moderation that balances the important objectives of free expression and security online. We also believe that companies hosting content must (1) adopt clear, accessible content policies, (2) provide transparency about the scope and scale of their content moderation efforts, (3) be transparent about the rationales for their content moderation rules, and (4) consistently enforce their policies and implementing rules.
Unpacking Gonzalez and Taamneh
Gonzalez and Taamneh were closely linked cases: both sets of plaintiffs alleged that the companies’ recommendation algorithms promoted terrorist content and thereby contributed to the attacks that killed their relatives. These algorithmic recommendations, the Gonzalez plaintiffs argued, made the companies more than passive hosts of content in a way that was not protected by Section 230. The Taamneh plaintiffs sued Twitter, Google, and Facebook under the Anti-Terrorism Act (not Section 230), claiming that the companies aided and abetted ISIS by knowingly allowing the group and its supporters to use their platforms and recommendation algorithms “as tools for recruiting, fundraising, and spreading propaganda.” But both cases presented weak facts: the alleged connections between recommendation algorithms and real-world harms were far too attenuated for the Court to take up the question of whether Section 230 would shield Google in Gonzalez. Because the Court held in Taamneh that “the complaint in that case fails to state a claim,” it concluded that the Gonzalez plaintiffs had likewise failed to state a claim and never reached the question of Section 230 liability. (For those who want to delve deeper into the legal details, SCOTUSblog is a good place to begin.)
In Anupam Chander’s view, the Court’s decisions have set a high bar for future cases seeking to pierce Section 230’s liability shield: “As long as it’s … not an intentional act to do harm to the world and promote harmful content, these companies aren’t going to be liable under this Court’s views.” Justice Ketanji Brown Jackson’s concurrence in Taamneh emphasizes that “[o]ther cases presenting different allegations and different records may lead to different conclusions.” Examples include trade association NetChoice’s challenges to Texas and Florida laws, cases that implicate both the First Amendment and Section 230. While those challenges work their way through the courts in the months ahead, Congress continues to focus on Section 230 as a vehicle for enacting major reforms in platform accountability.
Broadening the Platform Accountability Lens Beyond Section 230
Continued Congressional focus on platform accountability in content moderation and algorithmic ranking is welcome. But we are not persuaded that Section 230 is the right vehicle for advancing legislative and policy improvements on these fronts. The debate around Section 230 encompasses many issues and reflects diverse stakeholders’ desires to address a wide range of harms. The core question is how to design healthy online speech environments that balance the equities of free expression and safety while promoting robust competition among content-hosting services.
Amending Section 230 cannot achieve everything we might want to see within the broad category of platform accountability, and it might well have unintended consequences. Section 230 is the target of so many reform proposals related to governing platforms in part because it is one of the few laws on the books that deal directly with platforms, and in part because it is seen as the reason that platform companies are not held accountable for harms emanating from their products or services. But treating Section 230 as the primary vehicle for holding platforms more accountable for content moderation could have a host of troubling consequences. Carving even small holes in the liability shield for “bad” algorithmic recommendations or ranking is difficult, if not impossible, because companies have no reliable technical means of distinguishing ranking decisions deemed harmful from those deemed beneficial. And weakening or narrowing Section 230 protections may well push content-hosting platforms to moderate heavily in an effort to limit liability, chilling free expression in the process.
What Does Improved Platform Accountability Look Like?
Meaningful accountability goes beyond reforming or removing Section 230 protections and gets at larger concerns about the platform ecosystem. Principles like the Santa Clara Principles on Transparency and Accountability in Content Moderation, which OTI helped draft, are at the heart of our advocacy work with both platforms and government. Our work on algorithmic accountability offers a resource for policymakers. And there are legislative proposals that focus on meaningful accountability. Last year, OTI endorsed the Algorithmic Accountability Act, which would require impact assessments of automated decision systems and augmented critical decision processes. Its requirements would represent an important step in holding platforms accountable while still protecting free expression online. Bills like this would produce meaningful progress on key issues at the heart of debates about Section 230 and algorithmic ranking without performing major surgery on a delicate statute in ways that could profoundly affect the nature of online speech, safety, and competition.
Looking Ahead
The Court’s decision in Gonzalez to avoid the question of Section 230 liability gives Congress and civil society welcome breathing space to think more carefully about Section 230, content moderation, and algorithmic accountability. Incautious legislative or judicial experiments with Section 230’s liability regime could take a sledgehammer to core aspects of Internet functionality, chill free expression, and disadvantage small companies. To contribute to the public discussion, OTI is hosting an event on June 20th that explores the importance of Section 230 protections to the public interest. The event will feature a keynote introduction from Senator Ron Wyden and a fireside chat with him, followed by a panel discussion. We hope you will join us in thinking through the future of this foundational law.