A Year Into the Pandemic, We Still Don’t Know How Well Platforms Are Combating COVID-19 Misinfo and Disinfo

Blog Post
April 28, 2021

More than one year ago, the COVID-19 pandemic hit, upending the lives of billions of people. Almost overnight, our relationship with the digital sphere changed dramatically. For many people under social distancing requirements, the online space became the primary means of interacting with colleagues, family, and friends, as well as the primary source of information in an incredibly confusing time.

Unfortunately, the digital space also became home to a vast amount of misinformation and disinformation around the coronavirus. Stories promoting fake cures, bogus prevention methods, false origin theories, and other misleading claims became rampant on the internet, with tragic offline consequences. In response, many internet platforms expanded existing efforts or introduced new ones to curb the spread of misleading information on their services. Many companies later touted the effectiveness of these initiatives. However, very few provided concrete data that would allow us to evaluate whether and how their efforts had an impact.

As OTI outlined in a June 2020 report assessing how eight internet platforms were responding to COVID-19 misinformation and disinformation, the majority of platform efforts in this area fall into three key buckets: connecting users to authoritative information, moderating and reducing the spread of misleading content, and altering advertising policies to prevent exploitation and the marketing of misleading products. In the early stages of the pandemic, many companies understandably noted that they could not yet share data on the impact of these efforts, as the efforts had only just been implemented. More than a year later, however, many of these platforms are still not providing meaningful transparency around how these programs operate and what impact they have. This is concerning, as companies continue to implement these efforts and tout their effectiveness.

Currently, only two companies have published any data from their efforts to combat COVID-19 misinformation and disinformation: TikTok and Twitter. TikTok’s latest transparency report includes data on how many videos the company removed for promoting COVID-19 misinformation, how many times its PSAs on COVID-19 and COVID-19 vaccines were viewed, and how many times its COVID-19 information hub was viewed. Twitter reports on the number of accounts it actioned (i.e. suspended or required to remove content) for COVID-19 misinformation. However, neither company’s reports include comprehensive information about how much content the platform labeled for containing COVID-19 misinformation. In addition, neither company reports on certain critical information, such as how many ads it removed for violating COVID-19 policies or how much content was affected by algorithmic interventions (e.g. downranking or reduced distribution in recommendation systems). This data is important because many platforms’ responses to misleading information around COVID-19 relate to ads, and many rely on labeling and algorithmic curation practices. Despite this, we have very little insight into how effective these practices have been.

TikTok and Twitter’s reports are notable, however, in that they are the first corporate transparency reports to place COVID-19-related data front and center. As we outline in our Transparency Report Tracking Tool, most companies that issue transparency reports covering their Terms of Service-related content removals do not even share high-level or aggregate information about misinformation and disinformation removals on their services. Further, those that do often lump this category of content in with others, making granular analysis challenging.

Some companies, including Facebook, have shared some information about their efforts to combat COVID-19 misinformation over the past year, but have done so in a disjointed manner that makes holding them accountable difficult. For example, last April, Facebook published a blog post explaining how the company was combating misleading COVID-19 information on its services. The blog post has since been updated multiple times to include periodic data on how many pieces of content the company appended warning labels to. The post also includes information on how many people the company has guided to resources from the WHO and other health authorities through its COVID-19 Information Center and pop-ups on the service.

Thus far, the company has not included any information on COVID-19 removals or efforts in its quarterly Community Standards Enforcement Report. Rather, it has opted to situate these numbers in disparate and difficult-to-find blog posts that are continuously updated but not made clearly available (the April 2020 blog post was updated as recently as February 2021, but still requires in-depth web searching to find). This design choice significantly undermines transparency and complicates efforts by external stakeholders and the public to hold the company accountable for its efforts to combat misinformation and disinformation.

For platforms like Facebook, this lack of transparency is visible beyond their failure to make information easily accessible. For example, while the platform has policies that guide its approach to moderating COVID-19 misinformation, these policies are not part of its core Community Standards. As we note in our report, the policies are instead shared in disparate blog posts that are difficult to locate and string together. This makes it burdensome for users, researchers, and other stakeholders to comprehensively understand how the platform is approaching COVID-19 misinformation.

Facebook’s Oversight Board has expressed similar concerns. After deciding on a case related to COVID-19 misinformation in France, the Board issued a policy advisory statement recommending that Facebook create a new Community Standard on health misinformation, which should clarify and consolidate the platform’s existing rules in one central location. The Board also recommended that Facebook provide more transparency around how it moderates health misinformation.

Over the past year, COVID-19 misinformation and disinformation have taken the online sphere by storm, resulting in increased vaccine hesitancy, injuries due to false treatments, numerous cases of arson, and even hundreds of deaths. Internet platforms are under pressure to do more to tackle this kind of misleading information, and these companies promise they are responding. However, without adequate transparency there is no way to hold these companies accountable for their efforts and identify how these initiatives can be improved.

Related Topics
Artificial Intelligence, Content Moderation, Algorithmic Decision-Making, Platform Accountability, Transparency Reporting