App Stores vs. Platforms: Who Should Be Verifying Internet Users’ Ages?

Big Tech’s Debate Highlights Age Verification Risks to User Privacy and Data—But There Is a Safer Way Forward
Feb. 26, 2025

To make online environments safer for youth, legislators across the country keep returning to one problematic solution—age verification requirements. Viewing these requirements as inevitable, tech companies are advocating for legislation that shifts the responsibility and cost of implementation to other players in the digital space. Recently, Meta, X, and Snap sent a letter to South Dakota legislators praising their state bill requiring app store operators, like Google and Apple, to verify user ages. Meanwhile, Apple and others have pushed back on such suggestions, calling for platforms and online operators to verify their own users.

The debate over who should be responsible—and ultimately liable—for age verification highlights a key point often overlooked by legislators: age verification mandates pose risks to everyone’s safety and privacy online.

While online operators use a variety of methods to estimate or ascertain a user’s age, true age verification is currently only accomplished by requiring hard identifiers, such as a user’s government-issued ID or biometrics. In practice, this means users must often disclose their full identity, not just their age.

Requiring every user to present sensitive documents that disclose their identity to access age-restricted information not only puts their privacy and data at risk but also raises free speech concerns. OTI, alongside many other privacy advocates, has stressed these risks. Yet, these mandates continue to gain support, even amid ongoing constitutional challenges in the courts. Given the current focus on requiring technical solutions for youth safety both in the U.S. and abroad, online age verification requirements will likely become a reality in the near future.

As policymakers push online safety legislation forward, they should resist the call to require app stores to perform age verification; it’s not the “most sensible and effective” solution, as Meta’s letter claims. Instead, age verification, if it must be done, is most effective at the service level, where verification requirements would apply only to legally age-restricted content, and only at the moment it is actually required. As such, websites and apps like Meta, X, and Snap, which may host age-restricted content, are in the best position to determine when age verification is truly necessary.

Yet, while service-level age verification is the most effective path forward, implementing it requires the right technical infrastructure and protocols to ensure it can be done in a private, secure, and flexible manner.

Where This Verification Happens Matters

Each verification point comes with its tradeoffs for cost, efficacy, and risks.

Unlike service-level verification, app-store-based verification would leave large gaps in coverage. Most glaringly, it would not cover websites or web access to applications. This means users could easily circumvent age verification requirements by simply choosing not to use apps to access content otherwise available through a web browser.

In addition to app stores and platforms, other organizations have suggested that age verification should occur at the device level—as seen in one Idaho bill. Verification at the device or operating system level could open the door to a more invasive view—and more control—of people’s offline activity and data. At the same time, device-based verification could fail to account for different users of shared devices.

Most importantly, the point at which verification happens dictates which entity is responsible for age verification and determines who will collect, process, store, and secure personal information used to verify a user’s age. Handling massive amounts of sensitive data required for age verification is a substantial undertaking that can leave data vulnerable to breaches, theft, and government surveillance requests.

These factors may explain why some social media companies are seemingly trying to shift this responsibility and liability onto other tech players. It’s likely that social media companies want to avoid both the cost of implementing age verification at the service level and the potential damage implementation could do to user trust.

How Can We Move Toward Privacy-Preserving Age Verification?

Contrary to what Meta, X, and Snap have argued, age verification at the service level offers more robust coverage than app store implementation. The fact that, currently, there is no standard age verification model presents an opportunity to design one that prioritizes user privacy and security. Building a model age verification ecosystem around shared protocols and standards is the best path toward effective and responsible age verification.

In practice, such an ecosystem would employ privacy-enhancing technologies like encryption and zero-knowledge proofs. This tech can be used to verify age in a way that does not share a person’s identity with a website or platform while also concealing a user’s online activity from the age-verifying entity. At the same time, integrating trusted third parties—who would actually do the verification—will allow some flexibility for different providers’ needs and give users the choice to determine who holds the data that’s used to verify their age.
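To make the trusted-third-party flow concrete, here is a deliberately simplified sketch in Python. It is not a zero-knowledge proof; it illustrates only the token-based attestation pattern the paragraph describes: a hypothetical verifier checks a user’s ID out of band, then issues a short-lived token asserting nothing but “over 18,” which a service can check without ever learning the user’s identity, and without the verifier learning which service the token was shown to. All names are illustrative, and a real system would use public-key signatures rather than the shared HMAC key used here to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical trusted third-party verifier's key. In a real deployment
# this would be an asymmetric signing key (e.g. Ed25519), so services
# hold only the public half; HMAC is used here only to stay stdlib-only.
VERIFIER_KEY = secrets.token_bytes(32)


def issue_attestation(is_over_18: bool) -> dict:
    """Issued by the verifier after checking the user's ID out of band.

    The token carries no identity: only the boolean claim, a short
    expiry, and a random nonce so separate tokens are unlinkable.
    """
    claim = {
        "over_18": is_over_18,
        "exp": int(time.time()) + 3600,   # short-lived
        "nonce": secrets.token_hex(16),   # fresh per token
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}


def service_accepts(token: dict) -> bool:
    """Run by the website or app. It learns only 'over_18';
    the verifier never learns where the token was presented."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    if token["claim"]["exp"] < time.time():
        return False  # expired
    return bool(token["claim"]["over_18"])
```

The key design point is data minimization: the claim contains a single boolean, so even a breach of the service exposes no identity documents, and the per-token nonce keeps the verifier from correlating a user’s activity across services.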

To achieve this approach, there must be more coordination and alignment among technologists, industry, civil society, and age-verifying entities on how responsible age verification can and should work. As legislators push ahead on age verification, it is essential to put forth a technical blueprint for responsible age verification that places user safety and privacy at the forefront.
