February Digital Matters
2/29 - A new era for DPI, double-edged sword of innovation, and how tech is impacting elections
Blog Post
Feb. 29, 2024
Happy leap day, enjoy your bonus 24 hours in 2024! This month’s Digital Matters — our monthly round-up of news, research, events, and notable uses of tech — dives into a new and exciting era for Digital Public Infrastructure (DPI). Many in the international development space predict 2024 will be a year of action, as the building phase of many DPI projects and plans begins in earnest. The lessons learned from two early movers, Ukraine and South Korea, can be instructive. Yet while digital technology often provides new opportunities and solutions for tackling tough challenges, when used the wrong way, these tools can have unintended, negative impacts, sometimes exacerbating the very problems they aim to solve. We take a look at the debates surrounding the rise of Artificial General Intelligence, online safety in the U.S., and the use of data in development work to understand the advantages and pitfalls of technological advancement.
In that vein, we also dig into how digital technologies are being used to manipulate information environments in the context of national elections. Given the sheer number of elections taking place around the world this year, 2024 will be a test case for how well democratic institutions can stand up to attacks. Many unknowns remain, not least how exactly to measure the impact of nefarious uses of technology on voting decisions. We conclude by proposing one possible solution for mitigating the harms of manipulated information environments – building trusted mechanisms for members of the public to verify the communications purporting to be from government or political actors.
What opportunities does Digital Public Infrastructure offer?
There’s a lot to learn from countries in various stages of digital transformation. We are interested in the potential for scaling or piloting tools responding to real-time digital government challenges. For example, the urgent needs of both Ukraine's users and government in the wake of Russia’s 2022 invasion spurred new features and uses of Diia, Ukraine’s e-government platform. In a different context, South Korea’s use of digital solutions also offers a fresh perspective for how different regions of the world are imagining the future of digital government.
What we can expect for digital public infrastructure in 2024, by Benjamin Bertelsen and Ritul Gaur, World Economic Forum (Feb 13, 2024)
In 2023, much of the global discussion on DPI revolved around defining what exactly the concept entails and securing investment and funding commitments for building out digital transformations globally. As the World Economic Forum predicts, in 2024 we can expect progress realizing these commitments, with support from international development organizations. Putting DPI strategies into action will require a concerted effort and close cooperation between development organizations, governments, and civil society actors. Yet, as Benjamin Bertelsen and Ritul Gaur explain, adoption is only half the battle; measuring the impact of these pilot projects will be key to spurring a wider proliferation of DPI and accelerating digital transformations worldwide.
Can Ukraine Transform Post-Crisis Property Compensation and Reconstruction? by Yuliya Panfil, Allison Price, Tim Robustelli and Alberto Rodríguez, New America (Feb 7, 2024)
One of the many advantages of DPI is the ability to replicate successful projects across countries facing similar challenges. One such project, Ukraine’s eRecovery program, which allows Ukrainians whose homes have been damaged or destroyed by Russian aggression to apply for and receive compensation through the Ukrainian government’s Diia e-government platform, was the subject of a recently-released DIGI report co-authored with our colleagues in the Future of Property Rights program. The potential for Ukraine’s eRecovery program to help transform post-crisis property compensation and restitution globally is hard to overstate. The program’s success, despite ongoing hostilities, is based in large part on Ukraine’s prior investment in building robust DPI by way of Diia, which Digital Matters covered in December. Read New America’s full report, which provides recommendations for strengthening the eRecovery program, to learn more.
A Human Rights Approach to Ukraine's Rapid Digitalization, by Marti Flacks, Caitlin Chin-Rothmann, Lauren Burke, Julia Brock, and Iryna Tiasko, Center for Strategic and International Studies (Feb 14, 2024)
For another significant angle on Ukraine, read the Center for Strategic and International Studies’ report on how a post-war Ukraine can implement a human rights-centered approach to digital governance. As one of the world’s most digitally integrated societies, Ukraine has a unique opportunity to innovate for long-term prosperity. Yet it must also contend with security risks and accessibility issues, while meeting European Union standards on privacy and surveillance to advance the country’s EU membership bid. As the report’s authors argue, “In the end, whether digitalization is a force for good or ill in Ukrainian government will hinge on the transparency and accountability mechanisms that are built into these systems and the enforcement of laws to protect individuals from abuse.” Watch the report launch event here. In other Ukraine tech news, the German Marshall Fund of the United States hosted a conversation with Ambassador Nathaniel C. Fick and CISA Director Jen Easterly to discuss global efforts to strengthen Ukraine’s capacity to detect and defend itself from Russian cyberattacks. You can watch the recording here.
Event: A Report Card for Korean Digital Leadership, The Carnegie Endowment for International Peace (Feb 29, 2024)
This month, the Carnegie Endowment for International Peace hosted a launch event for a new report diving into South Korea’s path to digital leadership. While much of the attention on digital transformation has been focused on developments in the United States, Europe, India, and China, the South Korean government is a rising leader in the space, making big strides in setting standards and best practices for digital government. The lessons learned from Korea’s approach to digital transformation, which features a high degree of collaboration between the public and private sectors, can inform global efforts to align government and business around innovative DPI.
How is governance a key to navigating the double-edged sword of innovation?
These days, it seems like every week brings new advances and concerns in the world of AI. We are also wrestling with changes to the way we seek and consume information online. While there are clear upsides to technological leaps forward, there is a human responsibility to ensure they are implemented in line with fundamental values. It’s important to keep in mind that while digital solutions can be a force for good, the technology itself is not inherently good or ethical. An examination of how these tools are being used in practice can guide policies aimed at making them more equitable, safe, and fair.
Can Democracy Survive Artificial General Intelligence? by Seth Lazar and Alex Pascal, Tech Policy Press (Feb 13, 2024)
While the massive advances in the field of AI over the last few years have been impressive, many policymakers and tech leaders have begun wondering about the flip side of this coin. Could even greater AI advancements bring about the as-yet elusive Artificial General Intelligence (AGI) that could match or even surpass human intelligence, creativity, and ingenuity? And if so, what would that mean for democratic governance? As Seth Lazar and Alex Pascal argue in Tech Policy Press this month, “the advent of AGI could finish democracy once and for all.” Given AGI’s potential to shape economic, political, and societal processes, Lazar and Pascal question whether the pursuit of such a high-level and massively impactful technology should be allowed to continue unregulated, or even at all. Ensuring cutting-edge technology aligns with democratic values doesn’t happen automatically – it is a never-ending process that must constantly be adapted and updated to reflect technological advancements.
How to Apply an Intersectional and IDEA Lens for Social Impact Organizations, Data.org (Feb 2024)
The rise of data and other digital technologies has created incredible opportunities for social impact organizations to expand the impact, scope, and efficacy of their work. Yet while data can be used to advance social good, organizational practices for collecting, using, and storing data must also be examined. Irresponsible practices by these well-intentioned organizations can sometimes amplify the very inequalities and injustices they are working against. To mitigate these harmful effects, Data.org has created a toolkit for social impact organizations to “embed the principles of intersectionality and inclusion, diversity, equity, and access in organizations’ data practices.” As Data.org notes, decolonizing data will take time, but social impact organizations can begin leading the process by setting the example and standard for ethical data practices.
Ensuring Digital Services Act Audits Deliver on Their Promise by Jason Pielemeier, Ramsha Jahangir, Hilary Ross (Feb 19, 2024)
The inaugural DSA audits for platforms with over 45 million EU users are pivotal for ensuring accountability, but they face challenges: a lack of standardized methodologies, lofty audit expectations, and concerns over auditor independence. The authors warn that without clear compliance measures, the DSA’s vision for a safer, rights-respecting digital space may falter, and they advocate for a multi-stakeholder dialogue to craft a coherent audit framework.
Kids Online Safety Act gains enough supporters to pass the Senate by Lauren Feiner, The Verge (Feb 15, 2024)
Growing global attention to the harm that social media sites and other online platforms cause to young people has led many countries to pass laws aimed at reining in how these platforms treat their most vulnerable users. The U.S. answer to this challenge, the Kids Online Safety Act, has quickly gained steam in Congress, having recently acquired enough support to pass the Senate. The legislation would create a “duty of care” owed by tech platforms to their younger users, taking aim at social media “design features” that keep young people hooked on the apps. Yet some fear the legislation could lead to online censorship and even suppress “lifesaving” content, and advocate instead for comprehensive data privacy legislation that protects all online users. Such an approach might well be better suited to truly target the heart of a problem that affects many vulnerable groups, kids included. As we continue to see amendments to the bill, we hope the momentum of the hearings can yield comprehensive privacy regulation.
EVENT: State of the Net Conference Series, Internet Education Foundation (Feb 12, 2024)
This year’s internet policy conference hosted discussions on the intersection of technology and policy, particularly in the wake of President Biden’s executive order on AI. Key highlights included a keynote address by Principal Deputy U.S. Chief Technology Officer Deirdre Mulligan on leveraging AI for societal benefits while addressing ethical concerns, and a fireside chat featuring U.S. Federal Chief Information Officer Clare Martorana and Administrator of the United States Digital Service Mina Hsiang, moderated by Nancy Scola. They explored AI’s implementation within federal initiatives, emphasizing the balance between innovation and responsible governance and underscoring the conference’s role as a pivotal forum for shaping the future of internet policy.
Will manipulated information environments impact elections and what can be done?
2024 will be the biggest election year in the world’s history, with millions of people heading to the polls across states ranging from democratic to authoritarian. As election cycles in India, the United States, and other countries begin to heat up, government-sponsored efforts to derail free and fair elections and disinformation campaigns led by non-state actors have only increased. To make matters worse, it’s nearly impossible to measure exactly how these attacks will shift voters’ preferences and impact the public’s perceptions of these high-stakes democratic contests. While much remains to be seen, what we do know is that creating trusted channels of communication between the government and the public can mitigate some of the dangers posed by manipulated information environments.
This Website Tracked Hate Crimes in India. Then the Government Took It Offline, by Vittoria Elliott, Wired (Feb 12, 2024)
India’s election season is poised to take off in a matter of weeks. Yet new government efforts to quiet dissent and control the information environment raise questions about Prime Minister Narendra Modi’s commitment to free and fair elections. As Wired reports, last month the X account of Hindutva Watch, a local human rights watchdog, was blocked by order of the Indian Ministry of Electronics and Information Technology. Hindutva Watch, which tracks religiously motivated hate crimes perpetrated by followers of the far-right Hindu-nationalist Bharatiya Janata Party (BJP), has nearly 80,000 followers on X and serves as a critical information source for human rights advocates. Cases like this one will be critical tests of how effectively governments are able to manipulate information flows before major elections, with grave implications for democratic institutions.
F.C.C. Bans A.I.-Generated Robocalls, by Cecilia Kang, The New York Times (Feb 8, 2024)
Last month, the New Hampshire presidential primary was rocked by an attempt at voter suppression. On the eve of the vote, Democratic voters received robocalls that sounded like President Biden, telling them to stay away from the polls and “save” their votes for the general election. In response, the Federal Communications Commission (F.C.C.) voted earlier this month to outlaw the use of AI-generated voices in unsolicited robocalls. In the words of F.C.C. chairwoman Jessica Rosenworcel, “It seems like something from the far-off future, but it is already here.” The F.C.C.’s ruling is a step in the right direction, giving election officials the tools to go after bad actors. Yet each case of an AI-fueled misinformation campaign risks further damaging public trust in the integrity of the U.S. democratic election system.
Byting Back: Trusting Government Content in the Age of AI by Rowan Humphries and Alberto Rodríguez, New America (Feb 22, 2024)
While AI-fueled misinformation campaigns in connection to the U.S. presidential election are making headlines right now, AI tools are also being used in new and more powerful ways to defraud Americans. Fraudsters impersonating government actors are not a new problem – in fact, government imposter scams are some of the most common types of attacks out there. Yet the rise of AI has rendered these tactics more convincing, and given scammers the ability to scale up their operations. To mitigate this threat, the authors argue the U.S. government needs to develop a trusted method Americans can use to verify the identity of the political actor or government representative contacting them. The authors take a look at a few possibilities, arguing that “secure and trustworthy channels of communication between government and the public can go a long way to enhancing confidence in public institutions and strengthening democratic governance in the long term.”
Please consider sharing this post. If you have ideas or links you think we should know about, you can reach us at DIGI@newamerica.org or @DIGI_NewAmerica.