DHS Must Continue Publishing Its AI Inventory for the Sake of Our Rights and Safety
Blog Post

April 1, 2025
During the Biden presidency, the federal government’s use of AI grew rapidly. This growth was encouraged by the administration’s belief in the technology’s potential for productivity, innovation, and security so long as there was a concerted effort to mitigate its risks. As federal agencies experimented with AI risk assessments and chatbots, the Department of Homeland Security (DHS) emerged as one of the most prolific adopters of AI (ranking third behind the Department of Health and Human Services and the Department of Veterans Affairs).
DHS's use of AI affects wide-ranging aspects of life, including cybersecurity, travel, immigration, natural disasters, and national security. While the agency’s AI systems are heavily used in interactions with people immigrating to the U.S., the average American may have already unknowingly encountered these systems when an automated license plate reader scanned their car, a touchless airport unit verified their ID, or a tool analyzed their social media activity. This mass collection of personally identifiable information (PII)—names, addresses, biometrics—is consequential. Unreliable, biased, or improperly used AI tools can subject more people to hardship (unfair scrutiny, detention, restricted travel); compromise sensitive, life-altering information; violate freedom of expression; and misuse public funds.
DHS and other federal agencies responded to these risks by providing valuable transparency through annual AI use case inventories from 2022 to 2024. These inventories reported on their projects’ implementation, benefits, and gaps. DHS’s 198 known AI use cases in 2024 (not including classified tools) reveal an agency expanding its surveillance powers in various ways, including facial recognition, text analytics, risk assessments, and data aggregators. While this particular inventory is an important improvement in transparency, it also leaves key questions unanswered. As the Trump administration commits to expanding DHS’s mission, and immigration is increasingly used to justify wider surveillance, DHS must maintain inventories that allow policymakers and civil society to hold the agency accountable.
What Policies Led to AI Use Case Inventories in the First Place?
Public inventories of federal AI use cases exist because of landmark provisions adopted by both the first Trump administration and the Biden administration, including Executive Order 13960 (2020), the AI in Government Act (2020), Advancing American AI Act (2023), and Executive Order 14110 (2023). As the first Trump administration recognized the government’s crucial role in “foster[ing] public trust…and protect[ing] privacy, civil rights, civil liberties,” it instructed federal agencies to inventory how they have been using AI. The Biden administration expanded this framework via EO 14110 and by implementing guidance that required agencies developing safety and rights-impacting AI to conduct impact assessments, real-world testing, and independent evaluation as preconditions for deployment.
As defined by the White House Office of Management and Budget (OMB), safety-impacting AI can affect “human life or well-being, [the] climate or environment, [and] critical infrastructure or strategic government assets.” At DHS, this category includes automated identification of suspected contraband in suitcases and facial recognition. Rights-impacting AI relates to civil rights, civil liberties, privacy, or access to public services. DHS’s risk assessments on individuals, translations for legally binding processes, and biometric/social media monitoring all fall into this category.
What’s Included in DHS’s 2024 Inventory, and What’s Left Out?
DHS’s use cases present higher-than-average risk to privacy and civil liberties, but the agency’s tendency to narrowly define their impact on safety and rights reduces internal scrutiny. In 2024, DHS initially classified 33 percent of its AI uses as “safety- and rights-impacting.” However, DHS downgraded the risk assessment of several tools used in immigration and ultimately concluded that only 20 percent of its cases fell into this category. As the Brennan Center for Justice noted, “OMB’s guidance focuses on individual impact, downplaying the systemic context in which these tools are used.” Despite DHS’s narrow approach, this percentage outpaces the average of 13 percent across other agencies.
There are several reasons why DHS’s AI use puts privacy and civil liberties at higher risk. The Department’s AI relies on the collection, storage, and use of PII, such as biometrics and biographical information. Using PII without comprehensive privacy impact assessments increases the risk of re-identification, data breaches, AI systems with biased outcomes, and uneven targeting of certain populations. Of the five agencies reporting the highest number of AI uses, DHS reported the second-highest rate of PII use at 23 percent. Another 63 percent of cases, mostly new or retired projects, did not label their use of PII.
Our analysis of DHS’s 2024 inventory yields three key insights about the agency’s use of AI, accompanying protections, and the impact on people’s rights and safety.
1. DHS asks robust questions about impact but often leaves out answers. One positive trend in federal AI use stands out: since 2022, the breadth and depth of the questions federal agencies use to evaluate the technology have grown. While the 2022 DHS inventory only asked for a product's summary and development stage, the 2024 version posed deeper questions: adverse impact, mitigation of disparities across demographic groups, and appeal processes for those impacted. However, these three important questions were left unanswered in 90 percent of cases for reasons that are unclear. This missing information hampers meaningful transparency. The public and Congress deserve to know not only what AI products are operated by DHS but also how those products perform.
2. DHS’s contracted use cases potentially lack transparency and raise concerns about data protection. Only 19 percent of all projects reported complete or widely available internal documentation about their AI models’ training, evaluation data, and trustworthiness. This information gap could hinder DHS’s ability to identify and remedy problematic AI applications. Additionally, only five percent of use cases were developed entirely at DHS. The vast majority of tools were outsourced to contractors, and only 16 percent of these contracted cases reported information about models’ performance metrics. While DHS may be able to request this information on a case-by-case basis, this retroactive approach to vetting AI models contravenes best practices and risks unsafe uses of data. The inventory also does not clarify how the data that’s fed into models is stored and secured or the level of access granted to contractors. One Customs and Border Protection (CBP) social media tracking tool reportedly maintains PII for 75 years, raising the risk that a breach of that information could cause serious harm.
3. DHS’s automated decision-making warrants greater scrutiny. DHS hinders the public’s ability to identify AI use cases that involve automated decision-making. Several civil rights and immigrant advocacy groups have cautioned that predictive AI should not be used to determine legal status, officer deployments, detentions, or deportations due to the elevated risk of inaccurate and biased outcomes. Even though at least 58 projects’ descriptions mention automation, the inventory only lists one project as “significantly impact[ing]” rights or safety without human intervention. This is a vague determination that could understate automated inputs’ potential impact on human rights and safety. Recent research suggests that AI inputs heavily sway a human decisionmaker’s ultimate call, entrenching bias and conformity in the process. For example, DHS’s Hurricane Score uses machine learning to calculate the risk that an immigrant going through court proceedings will fail to appear in court. Officers use this score alongside other information to justify detaining or surveilling migrants, but it's not clear how heavily they weigh the AI’s contribution.
Another point of concern is that none of the use cases offer mechanisms for individuals to appeal or contest AI-fueled decisions. Individuals also have few opportunities to opt out of having decisions made about themselves with AI inputs. Their opt-out options are mostly limited to airports or checkpoints. Even so, a bipartisan group of senators complained last year that opting out of the Transportation Security Administration’s (TSA) facial recognition was “confusing and intimidating.” Abuses of power multiply in the absence of redress and opt-out mechanisms.
Next Steps to Improve Oversight and Transparency of DHS’s AI Use
The Trump administration swiftly rescinded President Biden's Executive Order 14110 and issued EO 14179, which is titled “Removing Barriers to American Leadership in Artificial Intelligence.” This order heralds a significant shift in how the federal government develops, deploys, and reports on the use of AI. EO 14179 gives the officials implementing it broad discretion to revise or rescind related policies and practices. It would be a mistake for DHS to respond by abandoning public safeguards, responsible AI innovation with research partners, and rigor in procurement and contracting. Preserving meaningful transparency remains essential to building public trust, as the Trump administration itself acknowledged in its 2020 executive order.
As thirteen research and public interest organizations recently wrote to the Office of Management and Budget (OMB), the repeal of EO 14110 does not prohibit agencies from publishing use case inventories. As DHS continues to ramp up its use of AI, the agency should continue to release its inventory of AI use cases. While the 2024 inventory was a notable advancement in disclosure, the agency must ensure future iterations address ongoing concerns about surveillance, data privacy, and public safety. In particular, DHS should provide substantive answers to existing inventory questions for each AI project, including queries about adverse impact, model documentation, the use of demographics as a variable, and appeal processes for individuals impacted by AI.
Alongside publishing the inventory, DHS should improve transparency about the development process. More public explanations about how the Department performs pre-deployment testing of AI tools can prevent the implementation of problematic and ineffective tools. Other agencies, like the Department of Housing and Urban Development (HUD), require that projects pass safety evaluations before graduating to the next development stage.
Last, and certainly not least, DHS must preserve and augment the capacity of its Privacy Office and the Office of Civil Rights and Civil Liberties (CRCL). We are concerned about cuts to DHS’s oversight offices, including CRCL, the Office of the Citizenship and Immigration Services Ombudsman, and the Office of the Immigration Detention Ombudsman. A recent report from the DHS Office of the Inspector General warned that from 2020 to 2023, DHS faced “resource constraints” and weak governance processes “that prevented them from completing the actions necessary to monitor the Department’s AI.” This gap is alarming and will only be exacerbated by dismantling a “statutorily-required position” like CRCL.
Though imperfect, DHS’s AI inventory is an essential element of public accountability. We hope it will be preserved, not discarded.