What Should U.S. Policymakers Know about the AI Act (So Far)?
Blog Post
Dec. 15, 2023
Thanks to the central role that generative AI has played in the public imagination, AI governance was a dominant tech policy topic in 2023. The provisional passage of the European Union’s marquee AI legislation, the EU AI Act, is a fitting capstone moment for the year. It was preceded by two notable developments in the United States—the White House’s executive order on AI and the internal chaos at OpenAI, in which Sam Altman left and rejoined the company within the span of five days. Just as U.S. policymakers have tracked the EU and its AI Act, these U.S. developments were on EU policymakers’ minds as they deliberated in closed-door trilogue negotiations that concluded with the AI Act’s provisional passage.
We don’t yet have any text from the legislation to react to, and the text itself will likely undergo changes. But the political agreement—encompassing safeguards, obligations, bans, fines and complaint mechanisms—means that it is overwhelmingly likely the EU will eventually have an AI Act. How did we get here, and what’s next? In this post, we break down the timeline and broader context of the AI Act and briefly discuss the implications for policymakers in the United States.
EU Policymaking Explained
To understand the AI Act’s journey, one must know the key players in the EU policymaking process. The European Commission—composed of 27 Commissioners, one from each member state—is the executive body that proposes legislation, implements decisions, and upholds EU treaties. The European Parliament is a legislative body, with Members of the European Parliament (MEPs) elected from constituencies across the EU. The Council of the EU is a legislative body composed of national government representatives. Finally, the European Court of Justice is the judicial body responsible for providing authoritative interpretations of EU law.
Ordinarily, the EU’s legislative process starts with the European Commission proposing a piece of legislation, which is then considered concurrently by the two legislative bodies. The Council works through specialized working groups to decide its general approach, usually before the Parliament reaches its own position. Specific parliamentary committees lead the in-depth discussion of the bill, choose a team to negotiate with the Council, and appoint a “rapporteur” to draft the committee’s report and suggest amendments.
The rapporteur’s report is presented to the full Parliament, which then adopts its position. Normally, the two co-legislators go back and forth until they agree on a common text. Through a “trilogue”—an informal mechanism used increasingly often in recent years—the Commission, the Council, and the Parliament can instead negotiate all at once to reach an agreement. The trilogue is the most secretive part of EU policymaking and typically takes place in closed-door sessions. Contentious legislation can require several trilogues, as we have seen with the AI Act’s development.
The AI Act’s Beginnings
When the Commission proposed the AI Act in 2021, it treated AI as a broad category, and its approach was rooted in European product safety legislation. As such, the draft homed in on the risks inherent in AI products. OTI analyzed the original proposal at the time, outlining the obligations associated with each tier of risk (unacceptable risk, high risk, and low or minimal risk) and explaining the Act’s potential shortcomings.
Then, in November 2022, generative AI burst onto the scene with OpenAI’s release of ChatGPT. Its breakthrough, both technologically and into the zeitgeist, made generative AI the topic du jour. As a result, while drafting its version of the Commission’s AI Act, the EU Parliament decided to add specific legislative language about generative AI and, more broadly, foundation models—a shorthand term for AI models trained on large data sets and designed to be adapted to other applications.
The Parliament’s proposal included eight significant pre-release compliance obligations for foundation model providers, including ensuring appropriate levels of performance, predictability, safety, and cybersecurity; monitoring and mitigating environmental risk; and producing technical documentation. In addition, the Parliament’s draft required generative AI providers to fulfill further obligations, including transparency protections, safeguards for the legality of generated content under EU law, and copyright guardrails.
U.S. Movement
Meanwhile, in the United States, movement on AI governance gained speed, though the destination remained uncertain. Senate Majority Leader Chuck Schumer (D-NY) convened a group of legislators to assemble informational sessions called AI Insight Forums, which initially featured a majority of industry participants and were subsequently broadened to include participants from civil society. At the same time, bills dealing with algorithms—some specifically applicable to generative AI—were introduced in Congress. However, the failure to advance the bipartisan American Data Privacy and Protection Act (ADPPA), a comprehensive privacy bill, in the 117th Congress amounted to a missed opportunity to establish a useful foundation for AI governance rooted in privacy protections and algorithmic accountability.
The most decisive action taken in the United States has been Executive Order 14110 and the Office of Management and Budget’s (OMB) accompanying Draft Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. These executive actions establish rules, processes, and areas of study for federal agencies in the development (including procurement), use, and deployment of AI. Both EO 14110 and the OMB guidance prioritize the need for agencies to innovate within guardrails rooted in both a risk-based and a rights-based approach to AI governance.
AI Act’s Final Act
With the EO fresh in their minds, EU legislators were determined to resolve several sticking points in the final trilogue, since the political calendar would otherwise have pushed the discussions past 2024. Among other loose ends, that trilogue tackled two major concerns: how to regulate “open” and “closed” generative AI (and foundation models), and whether to ban the AI surveillance systems EU member states were eager to use, or continue using, for law enforcement or military purposes. The latter category includes AI-enhanced staples of state surveillance such as racial profiling, predictive policing, real-time (or near real-time) remote biometric identification, and technology based on the discredited science of emotion sensing, among others. On the former concern, member states led by the France-Germany-Italy troika had suggested a full self-governance regime in place of the Parliament’s set of obligations, allegedly because the biggest European AI contender, Paris-based Mistral AI, stood to gain an advantage.
With EU member states intent on diluting the obligations for both generative AI and the use of AI in surveillance, the Parliament held firm for more than 30 hours before agreeing to a preliminary and broad political deal. The trilogue is not public, and the text is not yet finalized. First, “technical meetings” will hammer out the details of what was agreed to in principle, and then lawyer-linguists will fine-tune the text. The agreement will then undergo several rounds of internal ratification in both the Parliament and the Council before the members of both legislative bodies finally vote on it. Even then, the Commission’s implementing acts will define significant details, and European standards bodies will need to take up many substantive matters.
To say that aspects of the AI Act are still in flux would be an understatement. But reporting and official press releases paint a fuzzy picture of compromise: all generative AI and foundation models must meet baseline transparency requirements, while high-risk models must comply with additional obligations. Self-governance mechanisms, like codes of conduct, will serve as stopgaps until formal standards are in place—although open-source models are mostly exempt unless they fall into the high-risk category. A further compromise was reportedly reached in which most of the AI-enhanced surveillance technologies were banned, but with exemptions for national security.
In short, there’s a lot to analyze going forward, and it’s too soon for definitive takes on the AI Act. But we do know that preliminary political agreement on the broad parameters of the AI Act moves the conversation about how to regulate AI squarely back to the United States. Lessons from the EU experience will hopefully spur U.S. policymakers to incorporate both the rights-based perspective of the Blueprint for an AI Bill of Rights and the risk-based approach that predominates in global legislative and industry efforts. In addition, U.S. policymakers will need to tackle the question of how to ensure that the United States encourages the responsible development of a strong open-source ecosystem. And as more issues are pushed to standards-setting bodies, it will be important to build the capacity civil society needs to be meaningfully involved in developing technical standards.