To Combat Online Harms, Congress Needs to Shift the Focus Away from Section 230
Blog Post
April 11, 2024
Today, the Communications and Technology Subcommittee of the House Energy and Commerce Committee is holding a bipartisan hearing on Section 230 and its future. This is not the first time Members of Congress have weighed reforming or repealing Section 230. The impulse to keep examining this piece of legislation is understandable, and the scholarship of today's panel experts is important, but Congress is going down the wrong path by continuing to spotlight Section 230 in the context of addressing online harms.
The internet has been a transformative positive force, but, like most tools in human hands, it has also been used to perpetuate or exacerbate grave harms like discrimination, harassment, bias, and misinformation. Going online has become essential to participating in modern life, and these harms are magnified when you're part of a group that is already marginalized, both IRL and on the internet. This reality should be the foundation of any conversation about how to promote equitable access to digital technology and its benefits. How we achieve that outcome? Well, that's where many diverge.
Reforming or repealing Section 230 of the Communications Decency Act of 1996 is often touted as a silver bullet that will cure a laundry list of online harms. That claim is suspect, and it diverts focus from legislative and policy reforms that would actually deliver comprehensive privacy protection, algorithmic accountability, and greater transparency about content moderation practices.
What Is Section 230, Why the Repeal/Modification Focus, and What Are the Perils of That Approach?
Section 230 provides limited immunity to any service provider for taking action (or not) on third-party content. In short, a platform isn't responsible for the content posted by its users and can't be sued for deciding to keep that content up or take it down, with several important exceptions. Section 230 does not shield platforms from liability under certain laws, including federal criminal law, intellectual property law, and laws related to sex trafficking.
Section 230 is an easy target when it comes to figuring out how to address online harms, and its modification is usually touted as the solution to all (or most) of them because it is a well-known federal law that broadly covers the online ecosystem. The country's techlash has fueled the perception that Big Tech has run rampant primarily because of the leeway Section 230 provides, not because of these platforms' business models, their late-in-the-game investments in trust and safety, or the way the modern internet amplifies human nature's worst impulses.
Removing Section 230 protections will certainly affect Big Tech. Depending on how the reform or repeal is crafted, large platforms will either impose levels of censorship previously unseen or turn their services into a giant free-for-all, moderating nothing they aren't legally required to (imagine large platforms becoming like 4chan). And since Section 230 is not restricted to Big Tech, its reform or repeal will affect smaller companies, too. Lacking the resources to hire the lawyers needed for what would very likely be a slew of frivolous lawsuits, such platforms would have to follow the big players' lead and either overly restrict speech or leave horrible material up. Because Section 230 is not just for the commercial web, spaces like Wikipedia, the Internet Archive, and local libraries' online platforms would also lose their shield and could potentially even shut down.
Look no further than the debacle of FOSTA-SESTA to understand what lies in store for platforms after Section 230 reform. Beyond creating new criminal and civil liability for a broad range of actors, the law carved a new exception into Section 230 for cases where users' speech might be seen as promoting or facilitating prostitution, or as assisting, supporting, or facilitating sex trafficking. This led to, as the Center for Democracy and Technology put it, "chilling constitutionally protected speech and prompting online platforms to shut down users' political advocacy and suppress communications having nothing to do with sex trafficking for fear of liability." Worse still, the law wasn't even needed to shut down what the government deemed bad actors: federal authorities seized platforms like Backpage before FOSTA-SESTA became law.
Alternative Ways to Address Online Harms
Congress wants to find solutions to online harms, and that is a laudable goal. However, significantly altering Section 230 is not the right vehicle. It’s akin to driving 20 mph above the speed limit while going the wrong way on a completely different highway.
If we care about these socio-technical problems, we need to combat them at their roots. Concerns about privacy, monopoly, algorithms, bias, and fairness won't vanish simply because these companies can get sued for moderation decisions. As in the encryption debate, the fact that a shield protects bad actors and good ones alike doesn't mean removing it will rein in the bad actors; it does mean the good actors will suffer.
To be clear, the status quo is untenable and should not be defended. There are numerous heartbreaking stories of people losing their lives, livelihoods, peace and sanity, or even family members because of bad actors on the internet. We need legitimate, thoughtful legislative action to change that status quo. Fighting for Big Tech to be sued more freely over its moderation decisions may be a fine retaliatory measure for those looking to exact a pound of flesh, but it is not sound policymaking.
We applaud Congress for wanting to “do something” about the harms that have emanated from the web. But there is no reason to get bogged down trying to find the least bad way to reform Section 230, or to spend time arguing over changes at the margins in hopes of surviving First Amendment challenges. We can address these harms through other vehicles, including bills that have already been introduced and have nothing to do with Section 230. That is why I'm heartened that the same committee holding today's hearing is planning a bipartisan hearing on a broad range of tech legislation on April 17.
OTI has endorsed legislation like the Algorithmic Accountability Act, slated to be taken up during that hearing, which would require companies that use automated decision-making to conduct impact assessments and provide meaningful transparency about the results. OTI has also endorsed the American Data Privacy and Protection Act, which sailed out of committee in the previous Congress and forms the basis of the newly released American Privacy Rights Act, also to be discussed next week. Other bills tackling terms of service, surveillance advertising, and kids' privacy and safety are in the mix as well.
Congress should certainly act, but it should do so in a deliberate, meaningful way, not to score points with voters back home who think Big Tech is finally getting its comeuppance via Section 230 reform. Our leaders should act to tackle harms, limit potential side effects, and help make the internet and innovation an overwhelmingly positive force for all, especially for groups that currently don't have much of a say!