In Proposition 25 Vote, Californians Reject a Discriminatory Alternative to a Discriminatory System
Blog Post

Nov. 18, 2020
On Election Day, Californians rejected Proposition 25, the controversial ballot measure that presented voters with a difficult choice: whether to uphold or to repeal state legislation that would have replaced cash bail with algorithmic risk assessments. Cash bail is a longstanding pretrial system in which money is required as a guarantee that a criminal defendant will return for trial or hearings, a requirement that effectively criminalizes poverty. Numerous studies have demonstrated that cash bail inflicts disproportionate harm on communities of color, putting it in the crosshairs of many criminal justice reform groups, who see its eradication as an important part of addressing the racial inequities that plague the system. In voting down Proposition 25, California upheld cash bail but avoided the dangerous adoption of algorithmic risk assessments, which would have trapped defendants in a technocratic system just as racially biased and flawed, if not more so.
Senate Bill 10 (SB 10), the state legislation at issue in Proposition 25, passed in 2018 and would have made California the first state to eliminate cash bail by instituting pretrial algorithmic risk assessments in its place. After a number of amendments, the legislation would also have granted judges and pretrial service agencies broad discretion that could massively increase the detention of vulnerable groups rather than lower incarceration rates. Despite this, SB 10 passed easily in the state legislature and was signed into law by then-Governor Jerry Brown. Shortly after its passage, however, a national coalition of pro-cash-bail groups began circulating petitions against the law and quickly secured enough signatures to place SB 10 on the state’s 2020 ballot as a veto referendum, Proposition 25.
Somewhat disturbingly, SB 10 and Proposition 25 entered the political arena years after heavily publicized exposés of the bias inherent in machine learning tools used for pretrial risk assessment. For example, in 2016, ProPublica found that Equivant’s (formerly Northpointe) Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, a popular tool used across the nation by judges, probation officers, and parole officers to predict a defendant’s recidivism risk, was demonstrably biased against Black defendants. This racial bias has a number of causes, including that the datasets of past bail decisions used to train the machine learning algorithms reflect historical societal prejudices (for example, Black people are detained at higher rates than white people even when charged with comparable crimes). During training, the algorithm learns these patterns of bias against Black defendants and replicates the same historical trends of detention in its automated output.
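To make this training dynamic concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the data is synthetic, the lone prior-arrests feature stands in for the dozens of inputs real tools use, and the simple logistic model bears no relation to COMPAS’s actual, proprietary design. The point is only that a model fit to biased detention decisions reproduces the disparity even when race is excluded from its inputs.

```python
# A minimal, hypothetical sketch of how a model trained on biased
# historical detention decisions reproduces that bias. All data is
# synthetic; the feature and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: group 1 stands in for the over-policed group.
group = rng.integers(0, 2, size=n)

# "Prior arrests" is a typical risk-assessment input, but it is itself
# shaped by unequal policing, so it acts as a proxy for group membership.
prior_arrests = rng.poisson(lam=1.0 + 1.0 * group)

# In this toy world, actual reoffense behavior is identical across groups.
reoffends = rng.random(n) < 0.2

# Historical detention decisions: biased, detaining group 1 more often
# for the same conduct.
detained = rng.random(n) < (0.2 + 0.25 * group + 0.05 * prior_arrests)

# Train on the biased labels. Note that `group` is NOT a feature.
X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, detained)
scores = model.predict_proba(X)[:, 1]

for g in (0, 1):
    mask = group == g
    print(f"group {g}: true reoffense rate = {reoffends[mask].mean():.2f}, "
          f"mean predicted 'risk' = {scores[mask].mean():.2f}")
# Both groups reoffend at ~0.20, yet the model scores group 1 as higher
# risk, because it learned the bias baked into the historical decisions.
```

Dropping the protected attribute from the feature set, as this sketch shows, does not remove the bias; the proxy feature carries it through.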
Even when researchers have tried to make algorithms fairer by manipulating the data, adjusting the algorithm’s design, or retroactively attempting to account for potential bias, it is well documented that “fair” outcomes in justice decisions are extremely hard to achieve via machine learning. A critical reason is that there are many competing definitions of “fairness,” and it is mathematically impossible to satisfy certain of them at the same time. Another concern with risk assessment tools is the lack of transparency regarding how a defendant’s “risk indicators” are weighted against one another in the algorithm. Because many of these technologies are proprietary, the risk assessment process is often opaque, and we simply do not know exactly how an algorithm determines a defendant’s risk score. When algorithms are tasked with important decisions, like whether a defendant is released from pretrial detention, there must be transparency regarding how the determination was made, both to ensure a fair process and to give defendants the ability to challenge that decision through due process.
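The arithmetic behind that impossibility can be shown with a few assumed numbers. The confusion-matrix counts below are invented, but they mirror the tension at the core of the COMPAS debate, which the fairness literature has formalized: when two groups have different underlying rates of reoffense, a score cannot be equally calibrated for both groups and also produce equal false positive rates.

```python
# A toy illustration, with invented confusion-matrix counts, of why two
# common fairness definitions cannot both hold when groups have different
# base rates (the impossibility results associated with Chouldechova and
# with Kleinberg et al. formalize this trade-off).

def metrics(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (PPV, FPR) for one group's confusion-matrix counts."""
    ppv = tp / (tp + fp)  # of those flagged high-risk, the share who reoffend
    fpr = fp / (fp + tn)  # of those who never reoffend, the share flagged
    return ppv, fpr

# Group A: 50 of 100 people reoffend (50% base rate); 50 flagged high-risk.
ppv_a, fpr_a = metrics(tp=40, fp=10, fn=10, tn=40)

# Group B: 20 of 100 people reoffend (20% base rate); 20 flagged high-risk.
ppv_b, fpr_b = metrics(tp=16, fp=4, fn=4, tn=76)

print(f"Group A: PPV = {ppv_a:.2f}, FPR = {fpr_a:.2f}")  # PPV = 0.80, FPR = 0.20
print(f"Group B: PPV = {ppv_b:.2f}, FPR = {fpr_b:.2f}")  # PPV = 0.80, FPR = 0.05
# The score is equally "calibrated" (same PPV) for both groups, yet the
# higher-base-rate group suffers four times the false positive rate.
# Forcing the FPRs to match would instead unbalance the PPVs: the two
# fairness criteria cannot be satisfied simultaneously.
```

No amount of engineering removes this trade-off; designers can only choose which definition of fairness to violate.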
COMPAS is only one of many pretrial risk assessment tools used across America; others have been created by for-profit companies like Equivant, by non-profit organizations, and even by government entities. Criminal justice reform advocates across the political spectrum are conflicted when it comes to these tools. In 2014, New Jersey passed legislation replacing its cash bail system with a similar recidivism prediction algorithm known as the Public Safety Assessment (PSA), created by the Laura and John Arnold Foundation, a non-profit that advocates for progressive criminal justice reform. New Jersey adopted the PSA at the urging of the Pretrial Justice Institute, a criminal justice reform organization that has advocated for ending cash bail. This past February, however, the Pretrial Justice Institute reversed its endorsement of the PSA and acknowledged that algorithmic tools like these should not be used in bail reform because they perpetuate racial disparities; in the jurisdictions that implemented the PSA, the inequitable demographic proportions of Black and white detainees remained the same: 50% Black and 30% white. If even organizations aiming to better protect criminal defendants from pretrial injustice were unable to create and implement a fair algorithmic tool that did not discriminate against Black people, policymakers should seek other alternatives rather than trying to “fix” these inherently unfair technologies.
As advocates have repeatedly pointed out, cash bail is an immensely unjust mechanism of the criminal justice system that effectively criminalizes poverty and disparately harms communities of color, and it should be eradicated. But it should not be replaced with another discriminatory system, especially one that relies on the faulty notion that technology is inherently neutral, objective, and unfailingly correct. Accurately predicting a defendant’s recidivism risk is an extremely difficult social task that humans have struggled with for decades. Attempting to automate this deeply complex and morally fraught process is a fundamentally flawed approach to criminal justice reform, one that only exacerbates existing racial inequities and bolsters dangerous technocratic practices. For these very reasons, in July 2018, OTI joined 100 other advocacy and civil rights groups in releasing a statement of concern urging jurisdictions to stop using pretrial risk assessment tools. Earlier this year, the ACLU of Northern California revoked its support for SB 10, and Human Rights Watch and civil rights groups including the California State Conference of the NAACP opposed the referendum.
Proposition 25 cornered California voters into choosing between two racially biased systems that differed in kind, not in degree. Those wishing to put an end to the predatory commercial bail industry favor a number of other, non-discriminatory reforms, from transitioning to unsecured or partially secured bonds and releasing defendants without prior offenses who are charged with non-violent crimes, to overhauling and deconstructing the entire current carceral system. Proposition 25 would have eliminated cash bail at the expense of subjecting defendants to an algorithmic risk assessment system just as racially discriminatory as its predecessor. Rather than concealing injustice under a false veneer of technological objectivity, policymakers should implement serious alternatives to cash bail that do not perpetuate violence against communities of color.