A Deeper Look at Facebook’s Second Update of their Civil Rights Audit

Blog Post
Aug. 7, 2019

In June, Facebook released an update on its ongoing civil rights audit that builds upon the company’s first audit report from December 2018. The new report focuses on Facebook’s progress thus far and highlights where the Audit Team believes further changes are needed. The company began the audit in response to a November 2018 letter from the Leadership Conference on Civil and Human Rights and Color of Change on behalf of over 200 national civil rights organizations. The groups called on Facebook to take immediate steps to repair the damage its platform has caused to democracy and civil society. These harms are well-documented, ranging from insufficient protections for vulnerable groups enduring harassment to the rapid expansion of coordinated misinformation campaigns that seek to undermine political and social advances made by marginalized groups.

This report is Facebook’s second update, with the third and final report slated for 2020. Laura Murphy, a former ACLU director and distinguished civil rights and liberties leader, is leading the audit. The report begins by describing the developments at Facebook since the first audit report in 2018, and the steps the company has taken to address the civil rights concerns described in that report. As Murphy notes, despite some progress, much work remains to address those concerns.

The report includes key updates on Facebook’s practices, and several recommendations for much-needed improvements. Specifically, the report features a new focus on four particular areas: Content Moderation & Enforcement, Advertising Targeting Practices, Elections & 2020 Census, and Civil Rights Accountability Structure. Based on the work that New America’s Open Technology Institute (OTI) has done on these issues, this piece focuses on the first two areas: Content Moderation & Enforcement and Advertising Targeting Practices.

Content Moderation

Policy and Enforcement Improvements and Updates

This past March, Facebook implemented a new policy aimed at curbing the unchecked spread of white supremacy and white nationalism on its platform. Following input from civil rights advocates, Facebook expanded its policies against white supremacy and hate speech by explicitly banning the praise, support, and representation of white nationalism and white separatism, both of which were previously permitted. While this is a step in the right direction, Murphy finds this change too narrow, as user posts that do not “explicitly” praise or support white nationalism and separatism will stay up. Thus, Murphy recommends that the new policy also prohibit any post that espouses white nationalist ideology, even if the terms “white nationalism” or “white separatism” are not explicitly used.

Beyond Murphy’s suggestions, OTI urges Facebook to take additional steps when implementing these new policies. Specifically, moderators should account for context and nuance in this process, so that satire and posts condemning these hateful ideologies are not treated the same as posts promoting them.

The report also outlines how, in response to the growing and troubling trend of people using its Events platform to launch intimidation and harassment campaigns, Facebook expanded its existing policies against violence and incitement. The updated policy specifically prohibits calls to bring weapons to a specified event or location with the aim of harassing or intimidating individuals or members of a protected group.

Lastly, the audit report further clarifies Facebook’s recent enforcement of its Designated Hate Figures policy, which led to permanent bans of Alex Jones, Louis Farrakhan, and others. Murphy explains that amplifying or trafficking in hate speech triggers Facebook’s ban proceedings. Moreover, if these figures or organizations go beyond amplifying hate to advocating violence or engaging in hateful or violent acts, Facebook will prohibit other users from praising or supporting them. This determination involves examining online and offline activity and assessing signals, including asking Facebook reviewers to determine whether the users are self-described or identified followers of a hateful ideology, have carried out or supported acts of violence based on race or other protected characteristics, and/or have used slurs or other hate speech when describing themselves in the “About” section.

Content Review Improvements

In addition to the various policy changes, this report discusses the steps Facebook has taken to improve its content review process. One of these initiatives was the launch of a set of pilots and internal case studies to assess whether Facebook’s enforcement of its takedown policies is so overly broad that it censors content that should remain on the platform. While OTI welcomes this step forward, Facebook should also describe the results of these assessments in its transparency report.

Currently, around 30,000 people make up Facebook’s global “safety and security” workforce, with approximately half of those serving as content moderators. Many of these moderators are hired for their language or regional expertise, but there is a smaller pool of specialized reviewers for particular content areas like child sexual abuse material (CSAM) and terrorism-related imagery. One of the aforementioned pilot programs aims to revamp the process by which moderators are assigned content to review by creating a Hate Speech Reviewer Specialization. This pilot program would permit some content reviewers to focus solely on hate speech policies, rather than the full range of content policies, in an effort to allow reviewers to build expertise in one specific area and improve the consistency of enforcement.

Another consideration surrounds the widely documented harms suffered by content moderators who are subjected to countless hours of hateful and violent content. Facebook’s recognition of this trauma is a step in the right direction, but the non-committal language of “responsibly [limiting] the number of hours that a person should spend reviewing” and being “very aware of the concerns that content reviewers have access to sufficient psychological support” is far too weak.

Murphy notes that one of the most consistently cited complaints in this audit was the high frequency of errors and wrongful removals of posts. These “hate speech false positives” led to a separate investigation, which, although it is unclear who led it, revealed that Facebook failed to account for context in its review process: posts that satirized or criticized hate speech were erroneously removed. Murphy suggests retraining content reviewers to look at a post in its entirety, including the comment or caption that accompanies it, in order to improve the process and prevent these hate speech false positives.

In addition, the audit report notes that Facebook is contemplating a change to its process for analyzing hate speech content. Currently, moderators begin by deciding whether the content should be removed as a violation of Facebook’s policies. The review tool then asks moderators a series of follow-up questions about the rationale that led to the decision. In an attempt to improve accuracy, Facebook is testing the reverse of this process: reviewers are first asked a series of questions, and their answers then lead to a decision on whether the post violates Facebook’s policies and should be removed. Facebook reports that reviewers have welcomed this new process, as the added guidance improves consistency and facilitates a more refined evaluation.
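To make the contrast concrete, below is a minimal sketch of what a questions-first review flow could look like, assuming a small set of illustrative questions and a simple decision rule. The questions, labels, and logic are hypothetical and are not drawn from Facebook’s actual review tool.

```python
# Illustrative sketch of a "questions-first" review flow, in which a reviewer's
# answers to structured questions drive the removal decision, rather than the
# reviewer deciding first and justifying the decision afterward. The questions
# and decision rule are hypothetical and do not reflect Facebook's internal tool.

from dataclasses import dataclass


@dataclass
class ReviewAnswers:
    targets_protected_group: bool            # Does the post attack a protected group?
    is_satire_or_condemnation: bool          # Does it satirize or condemn hate speech?
    uses_slur_or_dehumanizing_language: bool


def questions_first_decision(answers: ReviewAnswers) -> str:
    """Derive a moderation decision from the structured answers."""
    # Checking context first helps avoid the "hate speech false positives"
    # (satire or condemnation removed in error) that the audit describes.
    if answers.is_satire_or_condemnation:
        return "keep"
    if answers.targets_protected_group and answers.uses_slur_or_dehumanizing_language:
        return "remove"
    return "escalate"  # ambiguous cases go to a specialized hate speech reviewer


# Example: a post quoting a slur in order to condemn it stays up.
example = ReviewAnswers(
    targets_protected_group=False,
    is_satire_or_condemnation=True,
    uses_slur_or_dehumanizing_language=True,
)
print(questions_first_decision(example))  # -> keep
```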

The Audit Team sets forth various recommendations for changes to the Hate Speech, Harassment, and Appeals & Penalties policies. The report urges Facebook to adopt a more inclusive definition of national origin that expands it from a single country to regional origins, as there should be no material distinction between attacks based on a person’s region of origin and those based on their country of origin.

The Audit Team also calls for an express prohibition of any efforts to orchestrate harassment campaigns. In addition, the report urges Facebook to create more protections for those who become the targets of these coordinated floods of hateful messages and posts. Specifically, Murphy invites Facebook to develop methods that would permit users to report or block such efforts in bulk, since the current individualized reporting mechanism is too time-consuming and ineffective against this harassment tactic. The Audit Team also emphasizes the importance of protecting activists and journalists, and urges Facebook to commit to working with these groups to identify areas where protections are inadequate and to test new solutions to this ongoing problem.

Last year, Mark Zuckerberg announced a new initiative to create an Oversight Board to adjudicate content moderation issues. Facebook’s proposed independent body would permit users to appeal decisions about their content to an external group. The Audit Team expresses its appreciation of Facebook’s consultation with civil rights groups and its requests for public input on this ongoing effort.

While OTI welcomes this initiative, we urge Facebook to take steps to ensure that the board can help improve Facebook’s appeals process and policy development, as outlined in the comments OTI submitted as part of Facebook’s public consultation. In particular, OTI explained the need to empower the board to provide meaningful input on content policy development, and to protect its independence and legitimacy.

Lastly, the Audit Team announced that the focus of its next report will be on Facebook’s appeals and penalty system. Murphy agrees with the criticism from the civil rights community that the current appeals process lacks sufficient transparency and seemingly issues inequitable penalties, ranging from temporary to permanent bans.

Advertising Targeting Practices

Improvements and Post-Fair Housing Act Investigation

The Fair Housing Act of 1968 was enacted to prevent discrimination against people based on race, religion, disability, and other characteristics in connection with housing. This past March, the Department of Housing and Urban Development (HUD) charged Facebook with violating the Fair Housing Act by allowing those advertising housing on Facebook to exclude people on the basis of race, religion, familial status, and more.

The charge came even though the initial 2018 audit report had identified three changes that Facebook made to its advertising systems to prevent this exact type of discrimination: the removal of “thousands of targeting terms” that could be used to target protected characteristics, a new requirement that advertisers affirmatively agree to adhere to Facebook’s non-discrimination policy, and the creation of a library of active ads that anyone can view. The audit report also describes additional changes that Facebook announced as part of the settlement of several discrimination lawsuits. Specifically, Facebook issued a non-binding commitment to make five major changes to its advertising system.

The first commitment is to limit the targeting options offered to advertisers marketing housing, employment, or credit opportunities. Ads for consumer goods like cosmetics or concert tickets will not be subject to this new, more restrictive advertising system for housing, credit, and employment opportunities. Murphy notes that ads targeted based on interests like Spanish-language TV channels, or targeted by ZIP Code, will not be permitted under the new, more stringent system, but she does not clarify whether any of these limitations will apply to the regular advertising system.

Second, a new tool will enable all users to search through a complete database of active housing ads posted on Facebook. Users will be able to view information about the ads, including who the advertiser is and the target location, regardless of whether they were a part of the audience targeted by the advertiser.
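To illustrate the kind of access such a searchable archive could provide, here is a brief sketch of how a user or researcher might query a housing-ad database and pull the advertiser and target location for each ad. The endpoint URL, parameter names, and response fields are hypothetical placeholders for illustration; they are not a documented Facebook API.

```python
# Hypothetical sketch of querying a searchable archive of active housing ads.
# The endpoint, parameters, and field names below are illustrative assumptions,
# not a documented Facebook API.

import requests

ARCHIVE_URL = "https://example.com/housing-ad-archive"  # placeholder endpoint


def search_housing_ads(keyword: str, region: str, api_token: str) -> list[dict]:
    """Return active housing ads matching a keyword and targeted region."""
    response = requests.get(
        ARCHIVE_URL,
        params={"q": keyword, "target_region": region, "status": "active"},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()
    ads = response.json().get("ads", [])
    # Each record is assumed to expose the advertiser and target location,
    # visible to any user regardless of whether they were in the ad's audience.
    return [
        {
            "advertiser": ad.get("advertiser_name"),
            "target_location": ad.get("target_location"),
            "ad_text": ad.get("body"),
        }
        for ad in ads
    ]


# Example usage (with a placeholder token):
# for ad in search_housing_ads("apartment rental", "Detroit, MI", "TOKEN"):
#     print(ad["advertiser"], "->", ad["target_location"])
```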

Third, Facebook will reinforce its anti-discrimination policies and adherence to anti-discrimination laws through a certification process it created in 2018. It is unclear whether this certification process was created proactively or in reaction to the lawsuits challenging discriminatory ads, but Facebook describes it as periodically asking advertisers to review the policies and certify their understanding of and compliance with them.

Fourth, “key” Facebook employees with responsibilities related to advertising will participate in training on fair housing and lending laws, which will be developed in partnership with the National Fair Housing Alliance. In addition, Facebook states it will expand this training to the policy, legal, and engineering teams within a year of this report.

Fifth, Facebook commits to continuing to expand its studies of algorithmic modeling, with a particular focus on the potential for unintended bias in algorithms and algorithmic systems. It is welcome that this process will include input from academics, civil rights and privacy advocates, and the rest of civil society. For example, the studies should include reviewing the work of Dr. Nathalie Maréchal of Ranking Digital Rights, who has written extensively on the problematic nature of targeted advertising.

Lastly, the Audit Team recognizes the concerns from civil society about the problems that Facebook’s ad delivery system may cause. In support of this point, Murphy references a study that reveals how Facebook’s ad systems may inadvertently steer ads to certain groups. Murphy acknowledges the progress made by Facebook, but reiterates there is more to be done.

Through this audit, Facebook has improved its transparency and acknowledged many of its own shortcomings. However, the various commitments Facebook has made are not binding, so it is important to remain vigilant in ensuring that the company stays true to its word. Even after the final audit report is released, Facebook should engage in continuous dialogue with civil society. In addition, given its growing reliance on algorithmic systems, Facebook should devote more funding to independent research and continue to support its ongoing internal studies into the potential negative externalities of such systems. OTI hopes this audit and the subsequent assessments the report recommends will help reduce the harms caused by erroneous content moderation and discriminatory advertising.
