Why does ‘protecting kids and families’ always come down to censorship?

Blog Post
Feb. 23, 2024

To a young person growing up with the social internet, the world has always felt infinitely big. When my high school French teacher connected us with pen pals, I already had acquaintances from around the world on Tumblr. When I came out as queer in 2019, I dug up the seven-year-old emails between me and a South African writer my age who gently pushed me on my objectively homophobic perspective on whether a fictional character could be bisexual. (A few years later, a new book revealed that the character is gay.)

I firmly believe that my exposure to wildly different ideas and lifestyles made me better. However, I know I was lucky to avoid the worst of what the internet can be for young people. In the 91 messages I exchanged at age 14 with a 24-year-old British man on Fanfiction.Net, nothing inappropriate ever happened. The internet is forever, so I just went back and reread them. We talked about each other's writing, depictions of mental illness in fiction more generally, surface-level cultural differences between the U.S. and the U.K., and, once, how much we both hated American Airlines.

The safety of young people online can’t be left up to luck, though. In possibly the least productive Congressional session in modern history, this is one of the few things lawmakers have agreed on. Advocacy organizations and think tanks—including our colleagues at the Open Technology Institute—agree: We’re experiencing a youth mental health crisis, and online platforms are part of the problem.

Of course, Congress's approach to solving this problem is to go overboard with an internet censorship bill. Such bills can be popular on both sides of the aisle.

On the surface, the Kids Online Safety Act (KOSA) makes sense. It requires covered platforms (primarily those that host user-generated content or facilitate public conversations between users) to take "reasonable measures" to prevent users under 17 from experiencing a variety of harms, including but not limited to mental health disorders, addiction to the platforms themselves, violence, bullying, and sexual exploitation and abuse. The issue is that the five members of the Federal Trade Commission (FTC) would decide which types of content create these risks and how a causal relationship is defined. FTC commissioners are nominated by the president and confirmed by the Senate, so the agency's politics fluctuate, and civil rights and digital privacy advocates worry that a politicized FTC could use the bill to censor content it disagrees with ideologically. Additionally, a loophole related to user interface and experience design would allow state attorneys general to do the same if they were willing to get creative. The details are a bit too technical for this piece, but the Electronic Frontier Foundation explains them well.

Members of Congress have stated that they're hoping for ideologically motivated restrictions. In an interview, co-sponsor Sen. Marsha Blackburn (R-Tenn.) explained that one of her reasons for supporting the bill is that it could "protect minor children from the transgender [sic] in our culture."

Generally, teenagers shouldn't strike up online friendships with people a decade older than they are. Still, I had conversations about depictions of mental illness in fiction with many people, and those conversations shaped my education and career. At 15, I started my first formal research project in this area, and I continued exploring it in undergrad, in graduate school, and now here at the Better Life Lab. I've been excited to contribute to our Entertainment-Focused Narrative and Culture Change Practice, and my book chapter on dysfunctional family dynamics in The Umbrella Academy is finally headed to print as of last week.

It would be easy to argue that teenagers shouldn't be engaging with online content depicting mental illness, sometimes severe mental illness. Parents and legislators understandably want to protect young people from challenging topics, but teenagers are dealing with these things either way. When I worked with teenage writers, I published countless op-eds on mental illness, trauma, family problems, sexual assault, discrimination, and other heavy topics. These pieces rarely led with stats or news items; they focused primarily on personal experiences. In high school, I had friends who joked about suicide constantly, and that was normalized. Looking back now, I wonder how much of it was really a joke. Magnet school culture was such that while several of us were researching mental illness, talking about our own experiences was far less common.

We know it's valuable for people to talk about their mental health. We also know that it's valuable for people to see their own identities and circumstances on screen. This doesn't just apply to mental illness. To return to Sen. Blackburn's comment above: blocking young people from information, stories, and conversations about people like them is dangerous, and we can link that danger directly to suicide rates. Teens who have learned about LGBTQ people and issues in school are 23 percent less likely to have attempted suicide in the past year.

In both culture and policy, we must address the mental health crisis among youth and the internet's role in it. However, passing legislation that allows a small group of politicians to arbitrarily decide what's bad for young people and compel platforms to block that content isn't the answer. Tech policy experts have recommended several alternatives, most prominently comprehensive privacy regulations that would protect minors and adults alike. Disconnecting youth from essential resources and social support would have disastrous results, falling disproportionately on those who are already most marginalized.