HUD’s New Rule Paves the Way for Rampant Algorithmic Discrimination in Housing Decisions

Blog Post
Oct. 1, 2020

Last month, the Department of Housing and Urban Development (HUD) issued its final rule on the Fair Housing Act’s (FHA) disparate impact standard, decimating the longstanding protection against discrimination. Disparate impact is a core principle in anti-discrimination law that recognizes that discrimination is not always intentional. Under the disparate impact standard, employers, housing providers, and other entities are prohibited from using policies or practices that have a disproportionate negative impact on protected classes, even if they appear neutral on their face.

Concerningly, HUD’s final rule removes the disparate impact test for most FHA cases involving algorithmic tools. Although these tools create substantial risk of disparate impact discrimination, the final HUD rule makes it nearly impossible for victims of algorithmic discrimination to hold companies accountable, and encourages housing providers to adopt and use discriminatory algorithms. The rule is therefore bound to amplify racial inequality—flouting the very purpose of the Fair Housing Act—at a time when minorities are suffering the worst health and economic effects of a devastating pandemic and calls for racial justice still echo through our streets in response to police brutality.

The Supreme Court’s ruling in the 2015 case Texas Dept. of Housing & Community Affairs v. Inclusive Communities Project affirmed the use of disparate impact claims as a legal tool in FHA cases. There, the Court made clear that disparate impact claims are necessary to achieve “the Fair Housing Act’s continuing role in moving the nation toward a more integrated society.” Rather than clarifying or codifying that decision, the recently finalized rule essentially reverses it (and the more than 45 years of disparate impact jurisprudence that precede it) where AI tools are involved. Further, because AI tools are becoming increasingly prevalent in housing and lending decisions, and because the rule will incentivize further adoption of those tools, it could render disparate impact claims obsolete in FHA cases, undermining a core principle of the law.

This final rule is based on a 2019 proposal by HUD, which aimed to establish new legal defenses that would permit housing providers to avoid liability under the disparate impact standard when they assert that they relied on algorithmic models for practices such as credit scoring, pricing, marketing, and automated underwriting. As we outlined in our individually filed comments, as well as in comments filed with a coalition of 23 civil rights and consumer advocacy organizations and individual experts, the proposal failed to account for the risks of algorithmic bias and discrimination, particularly for communities of color and other marginalized groups. During the public comment period, HUD received over 45,000 comments from civil and human rights organizations, data scientists, housing and financial services providers, disability rights groups, and more. While HUD removed the highly controversial algorithmic tool defenses from its final rule, it replaced them with another concerning affirmative defense for practices that “intend to predict an occurrence of an outcome,” which include the use of algorithms. The heightened legal hurdles and this new defense will make it particularly difficult for victims of algorithmic discrimination to bring their cases.

HUD’s final rule establishes potentially insurmountable legal hurdles for victims of discrimination who seek to hold companies accountable for their reliance on biased algorithms. To bring a lawsuit alleging disparate impact, the rule now requires plaintiffs to plead facts supporting five new required elements, dramatically heightening the standard for a prima facie case (that is, the showing that the cause of action is sufficiently established for the case to proceed past the pleading stage). This means that before the plaintiff even has the ability to obtain the relevant information from the defendant to present a strong challenge, they must demonstrate these five elements:

  1. The policy or practice is arbitrary, artificial, and unnecessary to achieve a valid interest or legitimate objective such as a practical business, profit, policy consideration, or requirement of law.
  2. The policy or practice has a disproportionately adverse effect on members of a protected class.
  3. There is a robust causal link between the policy or practice and the adverse effect on members of a protected class, meaning the specific policy or practice is the direct cause of the discriminatory effect.
  4. The disparity caused by the policy or practice is significant.
  5. There is a direct relation between the injury asserted and the injurious conduct alleged.

And, in order to establish that a policy or practice has a discriminatory effect, a plaintiff must prove elements two through five above by a preponderance of the evidence, meaning they must offer evidence showing that their claims are more likely true than not.
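For illustration, the kind of statistical showing that might satisfy elements two and four could look like the hypothetical calculation below, a minimal Python sketch in which every outcome count is invented: a plaintiff compares approval rates across groups and computes an adverse impact ratio. Under the new rule, however, even a stark ratio like this is only a starting point, because the plaintiff must also plead the causation elements without access to the algorithm itself.

```python
# Hypothetical approval outcomes from a housing provider's automated screening
# tool. All numbers are invented for illustration only.
outcomes = {
    "protected_class": {"approved": 180, "denied": 220},   # 45% approval
    "reference_group": {"approved": 320, "denied": 80},    # 80% approval
}

def approval_rate(group: str) -> float:
    counts = outcomes[group]
    return counts["approved"] / (counts["approved"] + counts["denied"])

protected_rate = approval_rate("protected_class")
reference_rate = approval_rate("reference_group")

# Adverse impact ratio: the protected class's approval rate relative to the
# reference group's. A ratio well below 1.0 points to a disproportionately
# adverse effect (element two) and a significant disparity (element four),
# but it says nothing about the "robust causal link" the rule also demands.
impact_ratio = protected_rate / reference_rate
print(f"Protected-class approval rate: {protected_rate:.0%}")   # 45%
print(f"Reference-group approval rate: {reference_rate:.0%}")   # 80%
print(f"Adverse impact ratio: {impact_ratio:.2f}")              # 0.56
```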

The third and fifth elements in particular create a high bar for plaintiffs where algorithmic tools are involved. The third element, robust causality, will be very difficult for a plaintiff to show, especially at such an early stage and at this evidentiary threshold. Although proving direct causation is a fairly typical requirement in the later stages of litigating discrimination cases, such a showing is extremely challenging when algorithms are involved, and nearly impossible at the prima facie stage. To show that, in the rule’s language, there is “a robust causal link… meaning the specific policy or practice is the direct cause of the discriminatory effect” will be very difficult for plaintiffs, who will almost certainly lack access to the inner workings of the relevant algorithm and to the data used to train it. The type of granular information needed to meet this standard would likely not be available to a plaintiff at this stage, if ever, given the “trade secret” protections companies claim for the details of their algorithms. For more complex AI tools such as neural networks, even the developer of the tool may not have visibility into all of the features of a given model, and may therefore be unable to pinpoint the direct cause of a discriminatory outcome. Accordingly, a plaintiff will most likely be in the dark as to whether the discrimination stems from the algorithm’s underlying logic, its training data, or any one of a number of other points where bias may creep in.

Likewise, showing a “direct relation” between the injury and the defendant’s conduct, as required by the fifth element, will be exceedingly difficult when algorithms have caused the harm. A plaintiff can likely point to outcome statistics that demonstrate a disparate impact, but this requirement demands that the plaintiff know how the relevant algorithm works, if an algorithm is at fault, and potentially even be able to manipulate that algorithm to show a direct relation. Such manipulation would require access to the AI tool’s code at the very least.

Even if the plaintiff is somehow able to meet all five requirements and establish a prima facie case, the newly available affirmative defense will make it extremely difficult for victims of algorithmic discrimination to prevail. In HUD’s Supplementary Information provided alongside the final rule, the Department acknowledges that the new defense is an “alternative to the algorithm defenses” that were previously proposed and met with much criticism. It is not clear how the new defense introduced in the final rule actually operates, and HUD did not provide an opportunity for public comment on this framing. However, it appears that the defense excuses algorithms whose impacts are disparate so long as defendants can show that the predictions are accurate and that protected classes would actually have less favorable outcomes without them. The defense gives algorithmic outcomes so much leeway that, perversely, it would legally permit most AI-generated disparate impact.

This new defense, combined with strenuous legal hurdles just to bring a case, will make it all but impossible to enforce the Fair Housing Act against any housing provider that relies on algorithmic tools. HUD’s new rule could even encourage companies to adopt algorithmic tools, even when they are aware that those algorithms could generate discriminatory or biased outcomes, because they are less likely to face liability for any disparate impact the algorithms may create, or even to have to analyze their tools for such impacts. Indeed, the new defense could effectively function as a safe harbor for housing providers that outsource their decisions to algorithms, without any requirement that they audit or otherwise conduct due diligence on those automated tools.

As OTI outlined in our report series exploring how internet platforms use a range of algorithmic curation practices, including ad targeting and delivery and recommendation systems, automated tools can generate harmful results and perpetuate historical biases in a manner that disproportionately impacts communities of color and other marginalized groups. These effects can have particularly significant consequences in housing, employment, and access to financial services. Algorithms used for housing and lending decisions are most likely not programmed to intentionally discriminate, but they can be discriminatory by their very nature, because they are trained on historical housing data in a country with a long history of housing discrimination and segregation. HUD’s final rule is therefore extremely concerning: it fails to account for the wealth of existing research indicating that the offline risks and harms generated by algorithmic systems are very real and can have significant consequences for already vulnerable communities.
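To make that mechanism concrete, here is a minimal, purely hypothetical sketch (assuming Python with NumPy and scikit-learn) of how a model trained only on facially neutral features, such as income and a neighborhood indicator, can reproduce the disparities baked into historical approval decisions. All variable names, coefficients, and numbers are illustrative and are not drawn from any real housing dataset or tool.

```python
# A toy illustration of bias perpetuation via historical training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: "zip_group" stands in for a neighborhood feature that is
# strongly correlated with race because of past segregation
# (0 = historically redlined area, 1 = other areas).
zip_group = rng.integers(0, 2, n)
income = rng.normal(50 + 15 * zip_group, 10, n)

# Past approval decisions were themselves tilted against zip_group == 0.
past_approved = (income + 20 * zip_group + rng.normal(0, 5, n)) > 60

# Train a "neutral" model: no protected attribute appears as a feature.
X = np.column_stack([income, zip_group])
model = LogisticRegression().fit(X, past_approved)

# The model still approves the historically disadvantaged group far less often,
# because the proxy feature and the biased labels carry the old pattern forward.
pred = model.predict(X)
for g in (0, 1):
    print(f"zip_group={g}: predicted approval rate {pred[zip_group == g].mean():.0%}")
```

The point of the sketch is that nothing in the code mentions a protected class, yet the trained model still reproduces the historical disparity through correlated proxies, which is exactly the kind of facially neutral practice the disparate impact standard was designed to reach.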

Now especially, our government should be working to address systemic racism through more equitable policy, but HUD’s rule does just the opposite, scaling back long-standing protections. Housing policy is inextricably linked with the discriminatory policing that led to the police killings and Black Lives Matter protests that defined our summer. Those protests against racial injustice have demanded that the government and our society address issues of systemic racism, particularly against Black communities—and housing discrimination is a core component of continued systemic injustice. Racialized policing is itself a byproduct of the very segregation that the Fair Housing Act set out to eradicate. And new tech tools perpetuate discrimination in policing, just as new algorithmic tools (combined with difficult legal standards) perpetuate discrimination in housing. Meanwhile, the COVID-19 pandemic has exacerbated economic disparities, further demonstrating how societal inequities and discriminatory housing practices can influence whether certain individuals have access to health care, transportation, and even food.

Using algorithms—which by nature will perpetuate patterns of discrimination because they’re trained on historical data—for housing decisions will disproportionately hurt communities of color and other vulnerable groups, and HUD’s final rule is therefore a concrete failure to account for ongoing national conversations around systemic racism and equity. For more than four decades, victims of discrimination have used disparate impact claims to challenge policies and practices that disproportionately harm groups protected by the FHA, and the law has been a key tool for addressing systemic racism. Unfortunately, housing discrimination is far from eradicated. According to the National Fair Housing Alliance, the number of housing discrimination complaints in 2018 increased by almost 9% from the previous year, to 31,202. This is the highest volume of complaints since the organization began collecting data in 1995, indicating that housing discrimination, whether involving algorithms or not, remains a growing concern.

Going forward, we strongly urge HUD to rescind this rule, which largely erases the ability to bring disparate impact claims when AI tools are at fault. Failing to do so would be irresponsible; companies are increasingly relying on AI tools for crucial housing decisions, and vulnerable communities need these legal protections now as much as ever.
