There's No One-Size-Fits-All Model for Digital Governance
Weekly Article
Oct. 18, 2018
It’s past time we regulate how the government collects, stores, and uses citizen data—and a recent ruling by India’s Supreme Court, on the country’s digital ID system, may provide us with a blueprint for how to think through that process.
Aadhaar is the world’s largest biometric database, having collected the fingerprints and iris scans of more than a billion people. Undoubtedly, the program does a great deal of good, for instance by helping deliver banking and financial services to India’s poor and low-income citizens. (This includes streamlining government payments through direct transfers to beneficiaries.) However, the data Aadhaar collects can also be used in a range of capacities—many of which extend beyond public-service delivery.
Just a few weeks ago, the Indian Supreme Court struck down parts of Aadhaar that allowed data-sharing with private companies. While the court found that Aadhaar was constitutionally valid and didn’t fundamentally violate citizens’ rights, it did curb the use of the digital ID for purposes outside public-service delivery. The reason, at least in part, is that though industry has often touted the market benefits of employing digital ID to spur entrepreneurship and innovation, there are still unanswered policy questions about how, precisely, the public sector should regulate the use of this information. Perhaps reflecting the spirit of the times—most obviously, the E.U. General Data Protection Regulation (GDPR), which took effect earlier this year—the court found that the national government needed to further clarify citizens’ rights via a new data-protection law.
Indeed, the world appears to be heading toward implementing more, not less, digital ID technology on the backend. Given that—and given the fact that commercial sectors are increasingly vying to obtain and use citizen data—it’s in our best interest to figure out how to regulate this information both securely and efficiently.
But first: a few caveats. Even though digital-protection wrangling isn’t isolated to India, national context is key. Estonia’s “e-Estonia” program, for instance, is often lauded as a leader in digital government, with 99 percent of government services available online and roughly 30 percent of voters using i-voting. The New Yorker’s Nathan Heller dubbed it “the most ambitious project in technological statecraft today, for it includes all members of the government, and alters citizens’ daily lives.” Everything from education, justice, and healthcare to banking, taxes, and voting has been linked across one platform. However, size, scale, and homogeneity matter: Estonia is a country of 1.3 million people, half of it forest land, and it has little history of intra-national ethnic division (more on that later).
Denmark is another fairly homogeneous country, one with uniquely high levels of citizen trust. Perhaps unsurprisingly, then, Denmark is also a leader in e-government services. Its five-year Digital Strategy has led to the creation of a digital ID system, “NemID,” which enables people to access a wide range of public services, make bank transfers, and even sign up for private-sector services, such as making hair appointments.
But while Estonia and Denmark both illustrate how digital ID and government services can make society more efficient, they’re not always perfectly instructive for a large, heterogeneous country like the United States. The U.S. federal system, in particular, makes data-sharing and efficient data management across levels of government especially challenging. On top of that, the primacy of states’ rights and local governance, which are wired into the very foundation of American institutions, will likely make the move to e-government that much harder.
That said, it’s still crucial to learn what we can as we attempt to create a more generalizable human rights framework for e-services and digital ID programs. Indeed, while there may not be a one-size-fits-all model, there are some key questions we ought to be asking as more and more governments wrestle with the tension between privacy rights and service delivery.
The first question is internal to countries. Given countries’ unique structures and technical and institutional capacities, what changes are necessary to move toward more digital governance? One example comes from India, where the court ruling itself exposes how difficult—and at times even contradictory—regulating data protection can be. Consider that while the court’s decision prevents corporate entities from demanding an Aadhaar card in exchange for goods or services, an Aadhaar number is still required for filing income-tax returns and applying for a Permanent Account Number (PAN). Moreover, Aadhaar is a requirement for at least 22 welfare programs in India, even as several banks have denied services to customers with an Aadhaar card. This tension shines a light on the need to bring existing requirements in line with proposed regulations. Translated to the United States, any system would have to reconcile each state’s distinct privacy regulations with the identity requirements of federal aid programs.
Second, what are the takeaways from current global-governance examples—GDPR, Aadhaar, e-Estonia—and how can they be applied elsewhere to ensure that civil society has a transparent and legitimate process for engagement? Above all, for any model to work and have a lasting impact, it must engage civil society, industry, and academia, with each sector bringing its own strengths to the conversation. Put it this way: Nothing should happen in silos, because the questions surrounding digital ID have implications across multiple interest groups. Together, these groups can help advance new research, share learning opportunities, and build a broader network of knowledge that equips public leaders and the general public to regulate this emerging space.
And third, is it possible to genuinely empower civil society, so that traditionally marginalized communities get a seat at the table before standards are set and broader democratic norms are followed? China offers a sterling example of what not to do. Its social credit system ranks the population on a host of behaviors: posting fake news, buying too many video games, bad driving. The implications of the resulting score are only beginning to unfold, but they may include barring people from air or train travel, throttling Internet speeds, preventing people from attending certain schools, and even publicly labeling people “bad citizens.” What’s truly horrifying, however, is that this system specifically targets marginalized communities, like the Uyghurs and Tibetans, reinforcing China’s already deeply corrosive social hierarchy.
It’s too soon to say for certain how to regulate citizen data. That’s partly because there’s no agreed-upon set of values or technical considerations to guide this work. Even so, it’s clear that the first movers in this rapidly growing space, including India and the European Union, will play a large role in shaping the global conversation on whatever principles and guidelines we develop. There’s no one-size-fits-all model to show us the way forward, true, but that’s beside the much larger point: that there are examples of what’s working and what’s not—if we care to learn from them.