Demystifying AI: AI and Elections
Blog Post
Oct. 29, 2024
The Year of AI in Elections?
OTI’s “Demystifying AI” series breaks down what we really mean when we talk about Artificial Intelligence (AI). To understand the uses and potential impacts of AI—both generative and predictive—on society and individuals, a critical and holistic overview of what AI can and cannot do is necessary. While much of the public discourse homes in on generative AI in particular, the world needs to pay more attention to how predictive AI is used to forecast future outcomes and further automate decision-making. This series provides concrete examples of both the promises and perils of AI so we can move beyond the hype around this technology and ensure that we are responsibly shaping how AI is used—instead of allowing AI to shape us.
By the end of 2024, more than 70 countries—home to almost half the world’s population—will have held national elections and referendums. While each of these countries has its own election vulnerabilities, global concern about the potential for artificial intelligence (AI) to influence the electoral process has grown with the advancement and proliferation of the technology. AI’s evolving role in elections can be seen across the world, from Argentina and Pakistan to Slovakia and Taiwan—yet it’s unclear whether AI use has resulted in the widespread election disruption that many feared.
Is AI Good or Bad for Elections? It’s Complicated.
The following section provides an overview of how predictive and generative AI systems are being used—and misused—in campaign efforts and electoral processes around the globe.
Predictive AI
Predictive AI is a standard part of voter outreach today, helping campaigns hone the ways they identify, reach, and engage voters. Predictive AI allows campaigns to use available voter and consumer data to “microtarget” voters with personalized ads and fundraising requests. For smaller campaigns, such tools can level the playing field, expanding their reach and engaging new audiences. Beyond the campaign trail, predictive AI can also help facilitate electoral management by automating administrative tasks such as updating voter rolls, verifying voters, and informing resource allocation. In addition, predictive AI tools enable real-time data analysis, which can help officials identify and respond to threats or anomalies during the election-monitoring process.
However, predictive AI also carries the privacy concerns associated with the underlying data collection and use required to power these systems. As the Cambridge Analytica scandal surrounding the 2016 U.S. election demonstrated, access to such data can be misused to spread targeted polarizing and misleading information. In addition, without the necessary safeguards and transparency, predictive AI tools can be used to further voter suppression. As with any predictive AI system, election-related AI can amplify discrimination rooted in biased data or training practices. Without sufficient human oversight and auditing, over-reliance on AI systems may result in inappropriate outcomes and overlooked challenges, leaving impacted individuals without recourse.
Even predictive AI tools deployed well in advance of election season can have an impact at the polls. For instance, an AI tool used to identify gerrymandering can help address systemic injustices, but could also be misused for partisan aims to dilute voter impact—especially as AI tools are deployed to help forecast and map population changes.
Generative AI
One of the largest concerns dominating public conversation is the use of generative AI to produce campaign-related materials and sow election mis- and disinformation. Campaigns are using AI to generate political ads and create deepfakes of political and celebrity figures. The technology’s misuse has resulted in misrepresentations of political opponents and falsified endorsements, while also disproportionately targeting voters of color and fueling foreign influence operations. Yet not all generative AI use is malicious—for example, candidates in India used deepfakes to simulate themselves speaking in various dialects to better reach voters across regions. However, even when deployed with the best intentions, generative AI tools can have negative effects. AI-powered chatbots and search result summaries, for example, may produce misinformation, heightening challenges for voters with disabilities. In light of these concerns, some campaigns are shying away from a full rollout of AI tools, while major tech companies have pledged to limit election-related misuse of their products.
As with predictive AI, generative AI tools can be used beneficially and perniciously. While mis- and disinformation was misleading, influencing, and polarizing voters long before the advent of AI, this evolving technology can amplify the associated risks. To more effectively inform voters and combat disinformation, more research is needed to better understand the disproportionate impact of generative AI on disinformation and on specific election results.
A Closer Look: 2024 U.S. Elections
Generative AI has taken center stage in the U.S. elections. In January 2024, thousands of New Hampshire voters were targeted with a fake robocall mimicking President Joe Biden and discouraging voters from going to the polls. The audio deepfake resulted in both criminal charges and a $6 million penalty—the first AI-related fine the Federal Communications Commission (FCC) has levied. Mirroring international trends, generative AI in the U.S. elections has been used to attack candidates, create political satire, generate campaign materials, and mislead voters about celebrity endorsements.
In response to this AI misuse, regulators and legislators are taking action to mitigate the impact of AI on U.S. elections. The FCC ruled that robocalls using AI-generated voices are illegal and proposed rules requiring the disclosure of AI use in political ads. Congress introduced legislation regarding AI use in political ads and campaigns, and legislatures in at least 19 states have passed similar transparency and disclosure laws since 2019.
Ahead of Election Day, the Cybersecurity and Infrastructure Security Agency (CISA) and the U.S. Election Assistance Commission released materials to help election officials prepare for potential AI disruptions and proactively connect voters with accurate information. Individuals can also take precautions by using reliable government (.gov) sources to confirm their voter registration status and election day information.
Future of AI in Elections: Considerations for Industry and Policymakers
Both predictive and generative AI tools can be used with good or bad intentions. The widespread availability of AI has increased scrutiny on how generative and predictive AI tools can be used to influence voters and shape election outcomes. While more research is needed to fully understand the impact of AI, it is important to identify and mitigate any negative effects on voters and voter turnout. To do so, voters, policymakers, and industry must grapple with pressing questions about the future of AI in elections, including:
- What use, transparency, and disclosure requirements are needed to inform voters about and protect them from both predictive and generative AI systems used in the electoral and campaign process?
- What additional research is needed to fully understand the impact predictive and generative AI use is having on the electoral process—and, consequently, facilitate the development of more effective strategies for informing and protecting voters?
- Alongside AI-specific regulations, what additional protections—such as data privacy, algorithmic transparency, and civil and human rights—are needed to safeguard individuals and communities?
- As AI and other disruptive technologies proliferate, how can industry, government, and civil society foster an environment that encourages free, open, and fair elections?