Unpacking the White House’s Executive Order on AI
Blog Post
Nov. 10, 2023
Last week was defined by big-ticket U.S. government activity on artificial intelligence (AI). On October 30, President Biden issued Executive Order 14110 (“the EO”) on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The sweeping, hundred-plus-page document directs federal agencies to advance key policy objectives central to the responsible development and use of AI: ensuring AI’s safety and security, promoting responsible innovation and competition, supporting American workers, advancing equity and civil rights, protecting consumer interests, safeguarding privacy and civil liberties, and promoting global cooperation on AI governance. While the order focuses on directing federal agency activities, its effects will be felt throughout our governance ecosystem, including government, the private sector, academia, and civil society.
Shortly after President Biden signed the order, the Office of Management and Budget issued a draft of its implementing guidance (“OMB guidance”) for public review. Administration officials have emphasized that these steps constitute the “most significant” action on AI that any government has undertaken. Whether or not one agrees with this assertion, the comprehensive and ambitious nature of the Biden administration’s effort to alter the national and global governance landscape is hardly up for debate.
Notable Elements in the Executive Order and the OMB Guidance
What should we take away from last week’s developments? This analysis outlines key elements of the executive order without attempting to be exhaustive. In particular, we focus on requirements related to safety and security, protecting civil rights and civil liberties, and mitigating harms to people.
Safety and Security
Section 4 establishes a number of requirements on safety and security in AI and is the section of the order that focuses most comprehensively on managing product safety risks. It directs the National Institute of Standards and Technology (NIST) to develop guidelines and best practices “with the aim of promoting consensus industry standards” for trustworthy AI systems. This guidance will include a specific resource on generative AI that will accompany the AI Risk Management Framework. Importantly, the EO directs the Department of Commerce to create a reporting framework for companies developing dual-use foundation models that could pose security risks. The Department of Commerce is also required to assess the risks posed by synthetic content and develop guidance for watermarking and authenticating U.S. government digital content. Additionally, Section 4 prioritizes managing AI-specific risks to critical infrastructure and cybersecurity, as well as the intersection of AI and chemical, biological, radiological, and nuclear threats.
Addressing Harms to People
Multiple sections of the EO take a people-centric approach to discussing potential harms from AI systems. Section 8 focuses on protecting “consumers, patients, passengers, and students” from a range of potential harms that arise from AI, including fraud, discrimination, and threats to privacy. It directs agencies to address these threats across various sectors of the economy, including healthcare, transportation, and communications networks. Section 6 reflects the Biden administration’s focus on supporting workers and ensuring employees’ wellbeing through the significant economic shifts that AI will engender.
Protecting Civil Rights & Civil Liberties
The EO focuses considerably on the need to center civil rights and civil liberties in an AI governance regime. Section 7’s focus on advancing equity and civil rights builds on the White House’s Blueprint for an AI Bill of Rights issued last October. It directs the Attorney General to address civil rights violations and discrimination related to AI with a focus on the use of AI in the criminal justice system. Section 7 also directs agencies to prevent and remedy discrimination and other harms that could occur when AI is used in federal programs and to administer benefits. Several agencies are directed to take actions to strengthen civil rights enforcement in various sectors of the economy, including housing, financial services, and federal hiring practices.
Section 9 focuses specifically on mitigating privacy risks that arise from large-scale data collection and AI models’ inferences about people. In a welcome move, the EO directs federal agencies to invest in privacy-enhancing technologies and methods—such as differential privacy—to ensure that we can reap the benefits of advanced analytics while limiting the risks to people’s privacy. Additionally, the White House’s messaging—including its accompanying fact sheet for the EO—included a push from the President to Congress to “pass bipartisan federal privacy legislation to protect all Americans.”
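To make the privacy-versus-utility tradeoff concrete, here is a minimal sketch of the Laplace mechanism, the canonical differential-privacy technique the EO’s language gestures toward. This example is illustrative only (the `dp_count` function and its parameters are our own, not drawn from the EO or the OMB guidance): an agency could publish an approximate statistic whose noise is calibrated so that any single person’s presence in the data has a provably bounded effect on the output.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a count under epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means stronger privacy but noisier, less useful answers.
    """
    scale = 1.0 / epsilon
    # Inverse-CDF sampling from Laplace(0, scale); u is uniform on [-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical example: publish a noisy count of program enrollees.
noisy_enrollment = dp_count(1042, epsilon=1.0)
```

The key design choice is that privacy protection comes from the mechanism itself, not from trusting whoever holds the data: the noisy answer stays useful in aggregate while masking any individual’s contribution.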
The accompanying OMB guidance applies to all federal agencies’ use of AI except for national security systems. The guidance provides further detail on the EO’s requirements that each agency designate a Chief Artificial Intelligence Officer, remove barriers to responsibly using AI, and submit AI use case inventories to OMB, among other stipulations. Perhaps most significantly, the OMB guidance establishes two important categories: “safety-impacting AI” and “rights-impacting AI.” Each designation is accompanied by its own requirements and minimum practices, which include:
- Complete an AI impact assessment
- Test the AI for performance in a real-world context
- Independently evaluate the AI
- Conduct ongoing monitoring and establish thresholds for periodic human review
- Mitigate emerging risks to rights and safety
- Ensure adequate human training and assessment
- Provide appropriate human consideration in decisions that pose a high risk to rights or safety
- Provide public notice and clear documentation through the AI use case inventory
The guidance goes on to establish additional minimum practices for rights-impacting AI:
- Take steps to ensure that AI will advance equity, dignity, and fairness
- Consult and incorporate feedback from affected groups
- Conduct ongoing monitoring and mitigation for AI-enabled discrimination
- Notify negatively affected individuals
- Maintain human consideration and remedy processes
- Maintain options to opt out where practicable
Key Takeaways
Stepping back, here are a few broader observations on the impact of the EO and the draft OMB guidance.
- Values, ethics, and democratic principles are front and center. The EO is unapologetic about enshrining values and ethics as an explicit part of U.S. government policy and oversight. The importance of safety, addressing bias and equity, and protecting civil rights and civil liberties are emphasized, and federal agencies are directed to take concrete actions with these values as guideposts. In doing so, the administration has lent substance to a U.S. vision for AI governance grounded in core democratic tenets and laid out a rough template that Congress could adapt. This approach is also a signal to U.S. companies and the international community—including both partners and adversaries—that the U.S. governance approach will focus on the relationship between AI and democratic health.
- The EO puts innovation and mitigating harms on an equal footing. The order breaks with a long U.S. tradition of adopting a largely laissez-faire approach to prioritizing innovation and focusing on risks at the margins. The administration’s decision to focus squarely on harms is an acknowledgment that developments in AI have the potential to fundamentally reshape societies and economies. The EO sends a clear message that responsible development, prioritizing safety, and protecting rights should be integral parts of governing AI systems—not ancillary issues considered as an afterthought to the race to innovate. Given the administration’s rhetoric over the last year and more, this focus isn’t surprising—but it is nonetheless significant within the context of a historical U.S. government approach to emerging technologies.
- The EO and the accompanying OMB guidance embrace a focus on both risks and rights. The EO builds on the important foundations laid by the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework, which adopt rights- and risk-based governance models, respectively. The administration’s approach rejects a false choice between these models and attempts to give both full treatment within a single directive. In doing so, both the EO and the draft OMB guidance demonstrate to Congress that it is possible to embrace both the human-centric, rights-based approach and a product safety approach.
- Both the EO and the OMB guidance reflect the importance of public participation in AI governance. The EO reflects inputs from civil society that shaped the administration’s Blueprint for an AI Bill of Rights, which prioritized a rights-based approach and a focus on ensuring that AI benefits the most vulnerable groups without disproportionately harming them. The EO also incorporates key aspects of the voluntary commitments that companies announced at the White House, which, unfortunately, did not reflect meaningful civil society input. By inviting public comments on OMB’s draft guidance, the administration appears focused on ensuring that public input can shape the implementation of a landmark executive order. OTI looks forward to submitting more detailed comments on the guidance.