Utilizing the Online Crowd: How Swifties Could Keep Us Safe From AI

Feb. 2, 2024

The spotlight on AI-manipulated media just got a whole lot bigger. While an AI-generated robocall mimicking President Biden’s voice discouraged voters across New Hampshire from turning out, non-consensual deepfake sexual images of Taylor Swift circulated on X, formerly known as Twitter. These events not only highlight why it’s urgent for Congress to tackle AI’s use in disinformation; they also reveal creative and effective ways to combat the proliferation of deepfakes in the meantime.

AI’s potential impact on our access to trustworthy information, and its capacity to amplify the reproduction of non-consensual sexual images at unprecedented scale, are well documented. Even before the 2019 videos of Nancy Pelosi in which she appeared to slur her words, there was already concern about the wide availability of tools that could manipulate the media of public figures. As far back as 2013, scholars like Safiya Noble showed how women of color in particular have been victims of digital sexual exploitation through algorithms that reinforce sexualized notions and images in their search results. This exploitation can now be bolstered by consumer-level AI tools that enable face swapping, and the technology is only getting better.

The key to countering the disinformation that spread this past week was the response it provoked: confronted with this reality, the public, and in particular Taylor Swift’s famously devoted fans, the Swifties, sprang into action. They did so in ways that can teach us a lot about digital empowerment; the importance of the “crowd” in surfacing, combatting, and eliminating this content; and the power of mass mobilization in pushing tech companies and lawmakers to act.

Much of last year’s conversation on AI centered on warnings from industry leaders about the technology’s societal risks. What ensued was a year of convenings by global leaders on how best to address those risks and, in theory, regulate the development of AI to prevent them. In the end, a mix of regulatory and voluntary efforts has left us in limbo.

Both the EU, through its AI Act, and the United States, through a presidential executive order, have taken measured steps to begin regulating and harnessing the power of AI. Tech companies have agreed to voluntary commitments brokered by the Biden-Harris administration. But the truth is that none of these efforts could have prevented what happened this week to Joe Biden and Taylor Swift. So where do we go from here? What can we take forward from all of this?

While the internet took note of the Joe Biden deepfake, it was the digitally faked sexually explicit images of Taylor Swift that set it abuzz. Undoubtedly, the nature of the content drove much of the attention, but the pop star’s powerful fanbase, the Swifties, also helped. Known for their devotion, high engagement, and creativity, Swifties went on the offensive to defend Taylor Swift. Her relationship with the Swifties shows how her knowledge of her fans, her personal interactions, and her ongoing digital engagement through coded messages, or #Taylurking, have enabled her to cultivate and grow her online following. This form of digital community building shaped the response to the fake pornographic images of her that emerged.

The internet, and AI, is driven by the crowd. And this crowd rose to the occasion. In addition to identifying the source of the content, Swifties rallied to flood X with #ProtectTaylorSwift tweets that essentially drowned the images out. At the same time, their campaign raised the visibility needed to push companies like X and Microsoft to take action. X then blocked related searches to curb the spread of the images.

The reality is that this kind of concerted action can’t be expected for every instance of non-consensual sexual images. Nor can we expect that X, or other sites, will cut off searches so swiftly in the future. It’s also unfortunate that it took the likes of Taylor Swift being victimized in this way to elevate the issue and force the White House to make a statement on the matter. Lawmakers will also need to act to curb the use of AI in creating and disseminating non-consensual sexual images; this wouldn’t be the first time that Swift fans pushed Congress to act. But in the meantime, we can develop strategies and tools to do as the Swifties do: find the source, drown out the content, and raise visibility to drive action from large social media platforms that can’t or won’t take other steps to curb this content voluntarily.

This moment shows that more can be done, and that perhaps it’s the humans, not the machines, that will save us from a future in which misinformation and disinformation victimize people and erode all sense of trust and safety. We must not only broaden engagement around AI but also invest in technology that strengthens crowd-sourced identification of, and quicker responses to, non-consensual sexual content. This story is a reminder that, even with the imperfect online tools available to us, mass mobilization that protects people, pressures companies, and shores up trust and safety is still possible. And we need to see more of it in service of all, especially the less powerful among us.

