Youth Deserve a Thoughtful, Holistic Approach to Online Safety

Brief
Sept. 23, 2024

Overview

In recent years, there has been considerable attention paid to youth online safety and well-being. Legislators have proposed different technical interventions to improve the internet’s safety for young people, including age-appropriate design codes, increasing parental controls, and requiring age verification. Some legislators have suggested banning young people from social media altogether. As the Open Technology Institute (OTI) has underscored in our report on age verification, youth online safety requires a holistic approach, as it’s unclear if any of the commonly proposed technical interventions can fully or directly address the challenges that young people face online.

Many of the proposed technical interventions present feasibility issues as well as constitutional and privacy concerns for users. For example, OTI, along with other civil society organizations and legislators, has raised concerns about the censorship risks of the Kids Online Safety Act (KOSA), especially for youth in marginalized communities. In August 2024, a federal appeals court upheld a partial block of California’s Age-Appropriate Design Code due to similar fears over censorship and speech regulation. And the age verification mandates appearing across state legislatures raise data privacy and security concerns and can result in over-censoring access to content for all users. OTI recently joined other civil society organizations in filing an amicus brief in support of an appeal challenging the constitutionality of Texas’s new age verification law, which will be heard by the Supreme Court.

Online spaces should be safer for youth. However, quick tech fixes that can cause more harm than good are inappropriate solutions. Instead, youth deserve a thoughtful, holistic approach to improving their online experiences and overall well-being. Recent reports from the U.S. Surgeon General and the White House Task Force for Kids Online Health and Safety discuss both the benefits and risks of social media use and highlight the need for more research to fully understand its impact on children and youth.

Improving young people’s mental health and overall well-being requires a nuanced approach that acknowledges and addresses the complex socioeconomic, community, and tech-based contributing factors. Addressing youths’ well-being also requires recognizing the vast breadth of social, developmental, and online contexts of the millions of children, teens, young adults, and families across the United States. While many stakeholders seek to better understand technology’s impact on youth mental health and development, common-sense policies that center user privacy, safety, and rights can help improve youth experiences and mitigate known online risks.

Policy Recommendations for Policymakers, Industry, and Civil Society

  1. Invest in multidisciplinary research to better understand the empirical effects of social media on youth well-being and investigate evidence-based solutions to improving youth experiences online.
  2. Pass comprehensive federal data privacy legislation to standardize basic online privacy protections for users across all states.
  3. Advance privacy-, security-, and safety-by-design practices to minimize risks to user data privacy and security.
  4. Implement features that allow greater user control and agency over individual online experiences.
  5. Require greater algorithmic transparency and accountability.

There’s No Quick (Tech) Fix for Online Safety

Growing concerns over social media’s impact on youth have prompted states, schools, and parents to try to limit the tech’s potential negative outcomes for young people. A 2023 Surgeon General report concluded that while social media benefits youth, it can also pose a meaningful risk to their mental health. Legislators have often sought to tackle these challenges through technical intervention. However, some of the most commonly advanced technical interventions have concerning implications for users and may cause more harm than good.

Age-Appropriate Design Codes and Features

Inspired by the U.K. Children’s Code, there has been a recent campaign to pass age-appropriate design codes (AADCs) at the state level. The campaign proposes 15 standards for company practices based on developmental stages, including privacy-by-design and by-default and required data protection impact assessments.

Despite the laws’ promising components, they face criticism for being overly broad and having larger implications for user access. As written, the AADCs charge online operators with protecting youth from “potentially harmful” material. However, without clear definitions of what that means, online operators may feel obligated to over-censor content, limiting access to protected speech and causing a chilling effect for all users. This may disproportionately impact marginalized communities and access to politicized content.

These concerns, in part, led to the blocking of California’s AADC in 2022 and were reaffirmed in a recent federal appellate ruling. Similar censorship concerns have surrounded the Kids Online Safety Act, which proposes a “duty of care” to implement design features to “prevent and mitigate” mental health disorders such as anxiety, depression, eating disorders, and substance use disorders. Such a broad mandate could have significant consequences for free speech online. Further, there is no clear way to determine what may influence the development of such disorders for every individual.

Parental Controls

Parental controls allow parents to filter content, set restrictions, monitor activity, enable permissions, and link their account with their child’s. However, these tools place a high burden on parents, who may not have the capacity, willingness, or digital skills to use them effectively. In fact, despite their availability, parents simply are not using these features.

Recent data shared by Discord and Snapchat shows that fewer than one percent of minors have parents who use monitoring tools. This aligns with a 2016 Pew study of parents of 13–17-year-olds that found parents are “relatively less likely to use technology-based tools to monitor, block or track their teen” than other measures.

In addition, these tools raise concerns for youth privacy and safety. Parental controls, when overly invasive and restrictive, can be particularly dangerous for youth who are already vulnerable, such as LGBTQ youth, those seeking access to or information about reproductive health care, or those experiencing child abuse and neglect at home.

Age Verification

Current age verification practices require users to provide a government-issued ID, credit card, or biometric data to verify their age. Such requirements can significantly challenge all users’ right to access content, as millions of Americans, particularly those in marginalized communities and under the age of 16, do not own a valid government-issued photo ID or hold a credit card.

Even when users have appropriate ID, without proper safeguards, the process of verifying user ages can endanger their data privacy and security. Previous efforts to implement similar age verification requirements have been ruled unconstitutional. In July 2024, however, the Supreme Court agreed to hear an appeal concerning Texas’s new law requiring age verification to access adult content. This ruling could shape future determinations of whether, and in which cases, age verification is constitutional—OTI’s amicus brief to the court details the risks of such legislation.

The Root of the Problem: It’s More Than Tech

Online spaces should be safer for youth. The challenges online environments and activity can pose to youth safety and well-being are serious. Pressing concerns such as child sexual abuse material (CSAM), cyberbullying, and access to age-inappropriate content require further attention and action to mitigate. And policymakers and public health officials’ current focus on social media’s impact on mental health and development deserves further exploration to deepen our understanding of exactly how social media—and technology use—impact youth across different stages.

At the same time, it is necessary to recognize that technological solutions can create additional challenges, such as potentially censoring access to content or putting user data at risk. And, on their own, technological solutions cannot adequately or fully address the challenges that youth face, many of which are rooted in offline issues. For example, today’s generation of young people faces myriad challenges impacting their development and their outlook on life—only some of which may be amplified by online activity. Societal challenges, including increasing gun violence in communities, accelerating climate change, higher reports of sexual violence, increasing income inequality, and an ongoing loneliness epidemic, can all contribute to youth mental health outcomes and development. Simultaneously, the COVID-19 pandemic disrupted key events and life experiences while changing people’s relationship with technology. Society is still trying to understand the pandemic’s total impact on our collective physical and mental health, particularly for young people whose key developmental years were impacted.

While technological design can help mitigate some harms young people face online, it’s important to be realistic about technology’s limited ability to comprehensively address complex online safety issues. Mental health and well-being are shaped by various complex socioeconomic, community, and tech-based factors. Because online spaces can amplify challenges youth face offline, strategies to improve those spaces cannot be developed in isolation; rather, they must holistically consider all contributing risks and protective factors. Policies that promote common-sense protections can improve young people’s experiences online by advancing privacy, security, transparency, and user agency. At the same time, these policies can reduce the potential for bad practices that amplify real-world challenges online and encourage excessive or problematic internet use.

Recommendations: How to Better Protect Youth—and All Users—Online

Social media and other forms of online spaces can offer important benefits to youth, fostering a sense of connection and belonging as well as creating avenues for activism and advocacy. Rather than supporting policies that can exclude young people from online spaces, it’s important to recognize the value of these spaces and work toward improving young people’s experiences within them. There is a wide range of approaches, including investments in digital literacy and specific interventions to counter CSAM, that can improve online safety for youth. The following recommendations are not a comprehensive list of solutions to online safety challenges. Instead, they are foundational ways to center user privacy, safety, and rights that address some of the underlying concerns driving the technological interventions discussed above.

  1. Invest in multidisciplinary research to better understand the empirical effects of social media on youth mental health and investigate evidence-based solutions to improving youth experiences online. As underscored in recent reports from the U.S. Surgeon General and the White House Task Force on Kids Online Health and Safety, more research is needed to understand the full scope of technology’s impact on young people’s mental health. Such research should determine what type of technology use and exposure results in negative outcomes and how exactly it impacts youth across all developmental stages. Furthering our understanding—in the United States and globally—of technology’s effect on youth can better inform evidence-based solutions and recommendations for best online practices, design features, and usage. In addition, this research may highlight the need for policy interventions and initiatives to address social and community-based challenges, beyond the online environment, which contribute to concerning youth mental health trends.
  2. Pass comprehensive federal data privacy legislation to standardize basic online privacy protections for users across all states. Comprehensive federal data privacy measures, such as those proposed by earlier versions of the American Privacy Rights Act (APRA) and the American Data Privacy Protection Act of 2022 (ADPPA), can establish a baseline of protections for users across the United States. Such measures can create data minimization requirements, establish online civil rights protections, create universal opt-outs, and establish other privacy rights for users, such as their ability to view, export, or delete their data and stop its sale or transfer. Creating a federal baseline for user privacy protections will reduce practices that are data extractive and exploitative while offering young people and their families more control over how their data is collected and used.
  3. Advance privacy-, security-, and safety-by-design practices to minimize the risks to users’ data privacy and security. Implementing by-design practices requires centering user privacy, security, and safety at all stages of technology development and implementation to mitigate risks before they occur. These design practices require greater responsibility, transparency, and accountability from developers on protecting users and their data online. Such practices avoid placing the onus solely on individuals, particularly young people, to navigate associated risks.
  4. Implement features that allow greater user control and agency over individual online experiences. While privacy-, security-, and safety-by-design practices offer improved baseline protections for users, customizable technology features can also provide users greater control over their online experiences. Depending on personal preferences, certain design features can help users mitigate unwanted or problematic online experiences, such as choosing who can see and interact with their accounts; who can message them online; and what content, suggested or not, they see online. Together, young people and their families can alter settings to best suit their needs and preferences at various ages.
  5. Require greater algorithmic transparency and accountability. Algorithmic transparency measures, such as those proposed in the Algorithmic Accountability Act of 2023, can require platforms to provide meaningful transparency about the algorithms shaping a user’s online experience. Users could gain greater insight into how advertisements, content, and connections are promoted and suggested to them. In addition, requiring impact assessments of algorithms’ bias and other effects can mitigate risks to users and help them make informed choices about their online activity.

These recommendations collectively offer a clear starting point for improving experiences for all users online, not just young people. To more closely address discrete challenges, greater collaboration among stakeholders is needed. Tackling technology’s role in youth well-being and negative outcomes requires greater exploration to differentiate which challenges can be addressed through technical design and features and which must be addressed through larger social initiatives. There is no easy or quick solution. Improving youth online safety requires a thoughtful, nuanced, and holistic approach. Rather than attempting to implement one-size-fits-all technical interventions that do not address root causes, policymakers, industry, and civil society should tackle known issues by advancing legislation and solutions that promote user agency and platform accountability in a rights- and privacy-respecting manner.

Related Topics: Platform Accountability, Cybersecurity, Data Privacy