January Digital Matters

1/31 - Exploring the critical need for responsible AI (and overall tech) governance.
Blog Post
Jan. 31, 2024

Kicking off 2024, this month’s Digital Matters—our monthly round-up of news, research, events, and notable uses of tech—previews the year ahead in digital public infrastructure (DPI) and tech governance. Many of DPI’s biggest proponents see AI as a key driving force with the power to supercharge innovation and economic development, particularly in healthcare and education. Yet without effective governance to ensure that AI and other promising new technologies are developed and deployed responsibly, it will be difficult to guarantee that the benefits of these breakthroughs outweigh the risks. Advocates for responsible innovation are pushing for urgent collective action across the public, private, and non-profit sectors so that innovation is human-centered and implemented in the public interest. Plus, AI-generated deepfake robocalls impersonating the President of the United States certainly won’t improve people’s trust in government.

In 2024, tech’s impact on our political, economic, and social systems and processes will undoubtedly be debated globally. From disinformation and elections, to exciting breakthroughs in healthcare technologies, to the role of AI in hiring, the January edition of Digital Matters explores the many intersections between tech innovation, ethics, politics, human rights, and sustainable development. We also take a look at the companies driving tech development, many of which are embroiled in highly consequential legal battles that may define the future of tech. A closer examination of the economic incentives at play and the forces at work behind the scenes says a lot about what to expect from “Big Tech” in the coming year, and how policymakers can steer development toward a more inclusive digital ecosystem.

What’s next in the Digital Public Infrastructure space?

The emerging field of digital public infrastructure (DPI) is ever-changing, as governments worldwide explore new and inclusive ways to harness tech to strengthen the provision of services. Quick refresher: just as physical infrastructure facilitates public access to basic goods such as electricity, water, and transportation, DPI leverages digital solutions and systems to improve public access to identity verification, data sharing, communications, and digital payments and transactions.

Unpacking the Concept of Digital Public Infrastructure and Its Importance for Global Development, Center for Strategic and International Studies (Dec 23, 2023)

Romina Bandura, Madeleine McLean, and Sarosh Sultan provide a helpful overview of how government investment in DPI can spur digital transformation and sustainable economic development. The authors highlight several key examples of DPI in action, such as the government of Thailand’s use of the PromptPay platform to facilitate quick cash transfers to citizens during the Covid-19 pandemic. They also discuss some of the primary challenges facing the field, including the need to ensure that digital solutions handling personal data and sensitive areas like identity and financial transactions are secure, interoperable, inclusive, and trustworthy.

Defending the Year of Democracy: What It Will Take to Protect 2024’s 80-Plus Elections From Hostile Actors by Kat Duffy and Katie Harbath, Foreign Affairs (Jan 4, 2024)

This year, more than half the world’s population will head to the polls, in what many are calling the biggest election year in history. The results of many of these elections will profoundly affect the future of tech policy globally. At the same time, the very technologies at the heart of global debates about digital governance will shape the outcomes of many of these democratic contests. As Kat Duffy and Katie Harbath argue this month in Foreign Affairs, this year’s election cycle will be a critical test case for how technology and democracy intersect. With democratic institutions around the world already under immense stress, the challenges posed by generative AI and AI-fueled disinformation, foreign influence operations, and rising polarization will only compound that pressure. To combat these growing threats, governments will need to step up digital literacy campaigns, while tech platforms will need to increase – not draw down – the resources and efforts devoted to building robust election integrity and content moderation teams.

2023 OECD Digital Government Index, OECD (Jan 30, 2024)

The OECD launched the 2023 OECD Digital Government Index (DGI), its assessment of digital transformation in the public sector, covering data from 38 countries (the United States isn’t included in the index because no data was available). The study benchmarks the advancements governments have made in creating adaptable governance structures and reliable digital public infrastructure and in harnessing emerging technologies like AI. Significantly, it underscores the crucial role of digital governance in ensuring sustainable, human-centered transformations, a role the challenges of the COVID-19 pandemic threw into sharp relief. It has become clear that accelerating the digitalization of the public sector does not automatically lead to better outcomes or to more transformative and sustainable change. To increase the effectiveness and efficiency of the public sector, governments need to become more flexible and future-oriented, capturing the benefits of digital transformation while mitigating its potential risks.

The State of the Digital Public Goods Ecosystem 2023, Digital Public Goods Alliance (Dec 14, 2023)

Over the last year, digital public goods (DPGs) gained ground, finding a prominent place in global dialogues, including the 78th UN General Assembly and India’s G20 Presidency. This report underscores the pivotal role of DPGs in establishing safe, inclusive, and interoperable digital public infrastructure while navigating the complexities of digital sovereignty. It maps the growth of the DPG ecosystem and explores how DPGs can help address urgent global challenges, including climate change and information pollution, while serving as a force for good in emerging tech like AI.

State's cyber bureau has ‘raised the U.S. profile on cyber globally,’ watchdog says by Edward Graham, Nextgov/FCW (Jan 12, 2024)

The rise of DPI and online government services also calls for a robust strategy for addressing new threats in the digital space. To address this challenge, many governments have begun elevating the importance of cybersecurity domestically and in their diplomatic relations with other states. The State Department’s Bureau of Cyberspace and Digital Policy (CDP) was created in April 2022 to do just that. According to a Government Accountability Office report released this month, since its creation, CDP has been highly effective in its efforts to “counter threats to the U.S. digital ecosystem and reinforce global norms of responsible state behavior.” As the digital world faces new and urgent threats, many of which are fueled by the rise of accessible AI tools, engaging our allies and a multilateral set of stakeholders on cyber issues is more important than ever.

Event: PIT-UN at the SIDGE Symposium (Jan 11-12, 2024)

Earlier this month, New America’s Public Interest Technology University Network (PIT-UN) partnered with the Center for Social Impact, Development, and Global Engagement (SIDGE Center) for a multi-day symposium over Martin Luther King, Jr. Day Weekend as part of the NorcalMLK Foundation’s commitment to advancing social justice and equity through innovative means. The symposium brought together scholars, data scientists, technologists, and thought leaders from PIT-UN and partner organizations to address critical questions surrounding the responsible use of data, ethics in AI, and the potential to harness technology for the betterment of marginalized communities.

Should there be normative limits on the applications of AI?

As the private and public sectors continue to explore use cases for AI, many argue that just because we can apply technical solutions to a variety of human problems doesn’t mean we should. New evidence of AI bias and discrimination, along with tech advances that imperil personal privacy in medical contexts, the workplace, and law enforcement, has added a greater sense of urgency to longstanding concerns about the degree of AI involvement in high-stakes decision-making.

New Book: The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now by Hilke Schellmann (Jan 2024)

From reading resumes to analyzing job interviews to monitoring workers’ productivity, AI has become an integral part of workforce management for many companies seeking to cut costs associated with HR and recruiting teams. Yet as investigative reporter Hilke Schellmann reveals in her new book, “The Algorithm,” these AI-enabled tools increasingly operate with minimal human oversight and influence critical decisions about who gets interviewed, hired, promoted, and fired. While some companies argue AI actually makes employment decisions fairer, a pattern of discrimination by these tools is emerging, with women, disabled people, and people of color bearing the brunt. Schellmann argues the outsized role AI plays in the workplace is one of the “most pressing civil rights issues of our time.” We agree: until we can understand how these algorithms make decisions, and correct the biases that currently run rampant through these models, a high degree of human oversight must be in place for sensitive and context-dependent decisions about human workers.

How the Federal Government Can Rein In A.I. in Law Enforcement by Joy Buolamwini and Barry Friedman, The New York Times (Jan 2, 2024)

Raising awareness about the myriad civil rights issues that come with AI use in sensitive sectors is only the first step to solving the problem. As Joy Buolamwini and Barry Friedman argue, government intervention can go a long way in constraining how, when, and where these technologies are applied. In response to several high-profile cases of bias and discrimination at the hands of AI-powered law enforcement tools, such as facial recognition technology and predictive policing systems, the Office of Management and Budget announced new guidance and recommendations for federal use of AI tools, with a particular emphasis on law enforcement contexts. While this is an important step forward, the loopholes in the OMB’s guidance may still leave many vulnerable to harm. Closer government oversight in the real world will be key to reining in abuse.

Advances in Mind-Decoding Technologies Raise Hopes (and Worries) by Fletcher Reveley, Undark Magazine (Jan 3, 2024)

New advances in neurotechnology have made it possible for people with Parkinson’s, paralysis, or other medical conditions that affect the voice to communicate again with the world around them. Yet these advances also raise ethical concerns: as neurotechnology reaches new heights, the innate privacy of our own minds becomes less of a given. It’s not difficult to imagine how these technologies could be abused and misused by governments and police forces, or even in the classroom. Enter the “neurorights” movement, an international wave of advocacy surrounding mental privacy. As Fletcher Reveley unpacks in a long read in Undark Magazine, neurorights advocates argue that the governance and regulation of brain-computer interfaces and other technologies that directly access what’s going on in our minds should be framed as a human rights issue. As with many other emerging technologies, the dual-use nature of brain-decoding technology calls for a close examination of the ethics of its use, and clear boundaries on what should and should not be done with it.

Will 2024 be the year for strengthening tech governance efforts?

2023 was a year of big breakthroughs in the world of AI and other digital technologies. As the growing power of these technologies — and the companies that created them — becomes increasingly clear, governments, civil society actors, and researchers worldwide have begun to study their effects on society, and what kind of governance should be in place to mitigate risks and harms. From new laws governing online safety in the UK and EU, to a slate of antitrust lawsuits against Google and Meta, government efforts to set guardrails on big tech have ramped up. How these legislative and legal battles shake out this year will define what the relationship between the industry, government, and society will look like in the future.

What’s next for AI regulation in 2024? by Tate Ryan-Mosley, Melissa Heikkilä, and Zeyi Yang, MIT Technology Review (Jan 5, 2024)

The end of 2023 saw a flurry of government efforts to put forth new proposals on how to regulate and govern AI, with divergences in approach between democratic and authoritarian states. This year will see many of those efforts crystallize, as MIT Technology Review’s Tate Ryan-Mosley and Melissa Heikkilä argue. In the U.S., federal agencies are beginning to grapple with the directives in Biden’s October 2023 Executive Order on AI. The U.S. approach to tech governance has historically been largely friendly to industry, with Biden’s directive focusing on setting standards and best practices for federal agencies to follow, rather than enacting binding restrictions on the private sector. Meanwhile, in the EU, lawmakers came to an agreement on the highly anticipated EU AI Act late last year – the final bill is likely to begin taking effect soon. China may also be working on comprehensive AI legislation of its own, on top of regulations requiring AI companies to register foundation models with the government, and rules for algorithmic recommendation services already in force. As Ryan-Mosley and Heikkilä argue, how tech companies respond to new regulatory regimes will be a key litmus test for the future of AI; many may double down on domestic development to avoid the complications of navigating different regulatory jurisdictions.

Tech’s AI Hangover Might Just Be Getting Started by Dan Gallagher, The Wall Street Journal (Jan 3, 2024)

Amid pending government regulations, tech companies are also grappling with business challenges closer to home. While the generative AI boom that began with the release of ChatGPT in 2022 spurred a massive wave of investment in the industry, companies are struggling to turn AI hype into profitable, marketable products. This year, we’re likely to see tech companies pilot different applications of AI in consumer products, from personal AI assistants, to photo editing software, to language translation tools. Yet developing a “killer app” for generative AI may take time: as Dan Gallagher writes in the Wall Street Journal, “it might take a while for tech’s expensive chatbots to prove they aren’t just talk.” While AI will undoubtedly reshape the workforce in many ways, good and bad, one application with positive potential is its ability to empower workers in technical fields like cybersecurity. As some argue, AI can remove some of the cognitive load and burnout typically associated with the field. Through a process of “upskilling,” AI can free up cybersecurity practitioners to focus on uniquely human skills, like critical thinking, creativity, and human-to-human interaction, leading to a more resilient and effective workforce.

The Unbearably High Cost of Cutting Trust & Safety Corners by Matt Motyl and Glenn Ellingson, Tech Policy Press (Jan 4, 2024)

Last year saw large numbers of layoffs at tech giants faced with economic uncertainty, and many large social media companies cut content moderation teams. While these layoffs may have saved platforms money in the short term, as Matt Motyl and Glenn Ellingson argue in Tech Policy Press, the platforms will certainly pay for those cuts in the long term. In 2023, hate speech and harmful content exploded online, particularly on platforms like X (formerly Twitter) that have relaxed content rules and cut back heavily on trust and safety teams in the past year. As the authors point out, these cuts may affect advertisers’ willingness to work with these companies, as well as users themselves: as users report increasingly negative experiences, they may become more inclined to stop using social media platforms altogether. Not only does prioritizing harm reduction make economic sense for tech companies, it also promotes a healthy digital ecosystem that serves the public good.

Please consider sharing this post. If you have ideas or links you think we should know about, you can reach us at DIGI@newamerica.org or @DIGI_NewAmerica.