Are Platforms Prepared to Tackle Misinformation and Disinformation During the U.S. Midterm Elections?

Blog Post
July 20, 2022

Over the past several weeks, the Select Committee to Investigate the January 6th Attack on the United States Capitol has held televised hearings underscoring, among other things, the role social media played in spreading and amplifying harmful conspiracy theories about the outcome of the 2020 U.S. presidential election. Since 2016, we have seen how election-related misinformation and disinformation can sow distrust in democratic institutions, foment political violence, and contribute to voter suppression, which disproportionately impacts communities of color and other marginalized groups.

Before the 2020 election, OTI published a report examining how 11 internet platforms were addressing the spread of misleading election information on their services. Promisingly, we found that many platforms had expanded existing measures and instituted new ones to tackle misleading information. Since then, however, many of these efforts have been rolled back, despite the fact that misleading information about the election still circulates online today. This raises significant concerns about whether internet platforms are prepared for the upcoming midterm elections.

Today, we are launching a new scorecard evaluating how 10 internet platforms—Facebook and Instagram, Google, Pinterest, Reddit, Snap, TikTok, Twitter, WhatsApp, and YouTube—are tackling misleading election information on their services ahead of the midterms. The platforms are evaluated against a selection of recommendations included in our 2020 report, which we consider baseline best practices that companies should implement. Some of our findings are outlined below.

Important, but limited, progress

Since the 2020 presidential election, online platforms have made some important progress towards addressing the spread of election-related misinformation and disinformation. For instance, all of the platforms we evaluated have instituted policies to address the spread of election-related misinformation and disinformation in organic content and have taken steps to remove, reduce, or label content that has been fact-checked and/or deemed to contain misleading election information. With the exception of Google, all of the platforms we examined offer dedicated reporting features that enable users to flag misinformation and disinformation. These are both substantial improvements from 2020 and mark important steps towards instituting clear and comprehensive mechanisms for taking action against misleading content and accounts.

In the run-up to the midterms, many platforms have also continued certain measures, such as partnering with fact-checking organizations, to promote, verify, or refute information circulating on their services. Such efforts help promote accurate information, empowering users to make informed decisions about elections and voting.

Slow progress in advertising

While internet platforms have made progress in combating misleading election information on their services, there are some critical areas for improvement. For example, there has been little progress on addressing misleading election information in advertising. Despite research underscoring how online advertising can amplify misinformation and disinformation, few platforms comprehensively review and fact-check their advertisements and many lack strong advertising policies that concretely prohibit misleading election information. This is a major area of concern, especially since it is unclear whether platforms like Meta and Google will reinstate the political ads bans they introduced around the 2020 election (and which they claimed were effective).

Additionally, very few platforms publish comprehensive advertising libraries, making it difficult to examine and understand the kinds of ads that spread on these services.

Lack of transparency

We found that most companies are providing less insight into how they are preparing for the midterms than they did in 2020. For example, in 2020, many services shared that they were partnering with a range of government and civil society organizations to promote, verify, or refute content. However, it is unclear whether these partnerships—once touted as crucial methods for connecting voters with reliable information—will continue ahead of the midterms. This lack of information suggests that the midterm elections are not a top priority for many companies.

Lastly, platforms have made disappointingly little progress in providing adequate and meaningful transparency around the scope and impact of their 2020 election interventions, including their content moderation and algorithmic curation efforts. This lack of data makes it difficult to understand which efforts were actually successful, and thus to push platforms to institute the most helpful measures ahead of critical events like the midterms.

Conclusions

As the midterm elections draw near, the stakes for combating election- and voter suppression-related misinformation and disinformation are growing. As our new scorecard indicates, online platforms have made limited progress towards combating election misinformation—such as by establishing comprehensive policies against this information and allowing users to flag it—but still show substantial room for improvement. Efforts to tackle misleading election information in advertising are particularly lacking, and platforms must also demonstrate greater transparency regarding the impact of their election interventions. Taking these additional steps is critical to countering voter suppression, restoring trust in elections, and promoting responsible civic engagement online. With the midterms mere months away, platforms need to act now.

Related Topics
Transparency Reporting, Content Moderation, Platform Accountability