Breaking Down the World’s First Proposal for Regulating Artificial Intelligence

EU Draft AI Regulation Makes Notable Strides Toward Regulating AI Systems, But Needs Further Work
Blog Post
June 10, 2021

Today, artificial intelligence and machine learning tools are ubiquitous across sectors—used for everything from determining an individual’s creditworthiness to enabling law enforcement surveillance—and rapidly evolving. Despite this, few nations have rules in place to oversee these systems or mitigate the harms they could cause.

On April 21, the European Commission released a draft of its proposed AI regulation, the world’s first legal framework addressing the risks posed by artificial intelligence. The draft regulation makes some notable strides, prohibiting the use of certain harmful AI systems and reining in harmful uses of some high-risk algorithmic systems. However, the Commission’s proposal contains gaps that, if not addressed, could limit its effectiveness in holding some of the biggest developers and deployers of algorithmic systems accountable. Because this proposal could set the stage for broader conversations about how AI systems are regulated globally, it is critical that the Commission work to refine the regulation so that it is feasible and effective.

In terms of scope, the draft regulation covers providers that place AI tools on the market, deploy them, or use them in the European Union, regardless of where in the world the provider is based. The regulation covers the use of algorithmic systems in a broad range of sectors, including finance, education, employment, government, and law enforcement. However, it does not cover systems used for military purposes or systems used in sectors that are already regulated by other acts (e.g., transportation). The draft law also sorts algorithmic systems into three buckets: 1) systems that generate an unacceptable risk, 2) systems that generate a high risk, and 3) systems that generate low or minimal risk. The provisions of the draft regulation would be enforced by national authorities in each EU Member State. A new European Artificial Intelligence Board (EAIB) would also be established to provide guidance to the European Commission, although it would not have enforcement powers.

Prohibited algorithmic systems

The draft regulation sets forth a list of algorithmic systems that are considered to generate an unacceptable level of risk (for example, by violating fundamental rights) and should therefore be prohibited. This list includes AI-based social scoring systems used by public authorities in some situations, “real-time” remote biometric identification systems used by law enforcement in public spaces (with some exceptions), and systems that can manipulate individual behavior using “subliminal techniques” or that exploit vulnerable groups such as children or disabled individuals in a way that would cause them or another individual psychological or physical harm. These prohibitions are a first step in responding to growing concerns from advocates and the general public that algorithmic systems can be used to manipulate individual behavior, surveil citizens in public, and promote unequal outcomes using algorithmic scoring tools. However, as some critics note, some of the prohibitions are too narrow, especially since they only ban law enforcement use of biometric mass surveillance, and even then contain major exceptions that would allow for police use of these dangerous technologies.

High-risk algorithmic systems

One of the primary focuses of the draft regulation is on the use of algorithmic systems that the Commission has determined are “high-risk,” as they pose a threat to the “health and safety or fundamental rights” of individuals. These include algorithmic systems that are used for credit scoring, determining an individual’s eligibility for social benefits, most law enforcement purposes, and immigration and border control.

The draft segments high-risk AI systems into two categories: systems that will be used as a “safety component” of products that will be subject to pre-deployment conformity assessments, and other stand-alone systems that have implications for fundamental rights. Both categories of high-risk algorithmic systems can be deployed and used on the European market if they meet certain requirements. In particular, designers and developers of these systems must:

  • Establish a “risk management system”: This risk management system must operate throughout the entire lifecycle of an algorithmic system and enable developers and deployers to understand the potential risks a system can pose when it is used in its intended manner and when it is foreseeably misused. 
  • Apply data governance and management practices and ensure accuracy and robustness: Each algorithmic system’s training, validation, and testing data will be subject to these practices, which relate to factors such as data collection, “data preparation processing operations” (e.g., data labeling and data cleaning), and evaluations of bias and gaps in data (a minimal, hypothetical sketch of such checks appears after this list). Notably, failure to comply with these requirements could result in fines of up to 6% of the company’s global annual turnover. This section of the regulation also notes that datasets should be “relevant, representative, free of errors and complete”, which may be too aspirational an expectation. High-risk AI systems must also be developed to provide an appropriate level of “accuracy, robustness, and cybersecurity” throughout their life cycles. This includes instituting mitigation measures for systems that continuously learn and could generate biased outputs based on feedback loops.
  • Engage in technical documentation, record-keeping, and oversight: In particular, developers must produce technical documentation before a high-risk system is placed on the market or deployed, and continuously update this information. Developers must also design these systems to enable the automatic recording of events (“logs”) when the system is operating. These functions must enable traceability throughout the AI system’s life cycle, and ensure the system can be monitored in situations where it could pose certain risks. Further, these systems must be designed in a manner that enables human oversight while they are in use. Human oversight should primarily aim to prevent or mitigate any risks related to health, safety, or fundamental rights. 
  • Provide adequate transparency to users: This includes information about the system’s performance characteristics, capabilities, limitations, and outputs. In addition, users must have access to information on situations in which the high-risk AI system could pose risks to fundamental rights, and on how the system performs for the group of users it is intended for.
  • Monitor high-risk AI systems after deployment: The Act requires providers of high-risk AI systems to monitor their systems after they have been released on the market. Providers must report any serious incidents which violate safety laws or fundamental rights to the national supervisory body. In these instances, a regulator can seek access to the system’s source code and can also force the provider to withdraw the system from the market. 
  • Register the system in a Commission-maintained database: Providers of standalone high-risk AI systems must register their system in an EU-wide database before placing the AI system on the market or putting it into service. The database will consolidate information about standalone high-risk AI systems that pose threats to fundamental rights, and it will be maintained by the European Commission. This effort could help to centralize information about some high-risk AI systems, though a full accounting of all systems could be overwhelming for the Commission to manage. At the same time, by not requiring providers to register all of their AI systems in the database, the Commission is allowing for some self-regulation, which could be problematic.
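
To make the data governance requirements above more concrete, the following is a minimal, hypothetical sketch (in Python, using the pandas library) of the kind of completeness and representativeness checks a provider might run on a training set. The dataset, column names, and metrics are illustrative assumptions for this post, not anything specified in the draft regulation, and real compliance work would go far beyond checks this simple.

# Illustrative only: simple completeness and representativeness checks of the
# sort the draft's data governance provisions gesture at. All names and data
# below are hypothetical.
import pandas as pd

def basic_data_governance_report(df: pd.DataFrame, protected_attribute: str) -> dict:
    """Summarize gaps and skews a provider might document for a training dataset."""
    report = {}

    # Completeness: share of missing values per column ("free of errors and complete").
    report["missing_share_by_column"] = df.isna().mean().to_dict()

    # Duplicate rows can silently over-weight some records.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: how records are distributed across a protected attribute,
    # a first step toward spotting under-represented groups in the data.
    report["group_shares"] = df[protected_attribute].value_counts(normalize=True).to_dict()

    return report

if __name__ == "__main__":
    # Toy, hypothetical credit-scoring training set.
    training_data = pd.DataFrame({
        "income": [32_000, 54_000, None, 47_000, 61_000],
        "defaulted": [0, 1, 0, 0, 1],
        "age_band": ["18-25", "26-40", "26-40", "41-65", "26-40"],
    })
    print(basic_data_governance_report(training_data, protected_attribute="age_band"))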

Given that there are numerous emerging uses and applications of AI on the horizon, the regulation empowers the Commission to expand the list of high-risk AI systems in certain cases, using a predetermined set of criteria and a risk assessment framework. In addition, the Commission argues that the draft provisions related to high-risk AI systems are in line with many international recommendations and principles related to algorithmic transparency and accountability, and that many “state-of-the-art” operators of algorithmic systems already follow these guidelines, so the regulation should be fairly seamless to implement.

Potentially manipulative algorithmic systems

The draft regulation also features transparency obligations for AI systems that could pose specific risks related to manipulation. These include systems that interact with humans, systems that use emotion recognition or biometric categorization, and systems that produce or manipulate content, including deep fakes. According to the regulation, when a user interacts with an AI system that is monitoring and detecting their emotions or other personal characteristics using automated mechanisms, they must be informed. In addition, if an AI system is generating or manipulating content that is made to appear authentic, the provider of the AI system has a responsibility to disclose that the content was produced artificially. These provisions are particularly relevant given growing concerns among advocates that companies are developing emotion recognition tools, and that users lack the skills needed to distinguish authentic content from artificial content.

Omissions and concerns

In its current form, the regulation includes requirements that training, validation, and testing data must be subject to data governance and management standards. In addition, the proposal notes that in cases where algorithmic systems are trained using personal data, the EU’s data protection and privacy regulation, the General Data Protection Regulation (GDPR), applies. However, the regulation does not feature substantive provisions governing how providers of high-risk AI systems train their systems. How an algorithmic system is trained significantly influences how, and how well, it will operate. The Commission's decision to omit requirements regarding the training process is therefore concerning.

In addition, the primary method the regulation uses to hold AI providers accountable is the conformity assessment. However, the proposed structure for these assessments is weak. As laid out in the annex of the draft regulation, a conformity assessment is essentially an internal evaluation procedure a provider carries out to ensure it is in line with the technical documentation and quality management expectations set forth in the regulation. Providers are not subject to any form of external accountability, as they do not need to share findings from the assessments with the public or the regulator. Rather, providers must simply indicate to the regulator that their systems conform with the standards set forth in the regulation. In this way, the conformity assessment process essentially allows providers to grade their own homework.

The draft regulation hints at other forms of assessments that providers can conduct, such as bias assessments and other forms of impact assessments. However, the draft does not provide concrete guidance on how these mechanisms should be structured and implemented. Many of the transparency and accountability provisions for high-risk AI systems are also vaguely worded, which could give developers broad flexibility to determine how they comply with these requirements, further limiting the regulation’s overall value as an accountability mechanism.

In the draft law, the Commission also notes that it has the flexibility to expand the list of AI systems it considers high risk. However, the regulation does not clarify whether the Commission has the same flexibility to expand the list of prohibited AI systems. This needs to be clarified in future deliberations around the draft law, as the use cases of algorithmic systems will continue to grow, and there are likely additional dangerous systems and use cases that we cannot currently foresee.

Finally, despite the European Commission’s vocal opposition to big internet platforms and their untethered use of algorithmic systems, this proposed AI regulation does little to rein in these companies’ uses of AI tools. For the most part, social media curation and app store algorithms would fall out of scope. As some experts have noted, algorithms used in ad targeting and delivery and in recommendation systems could be considered manipulative or exploitative practices that could be prohibited, but the regulator would need to firmly make this determination.

Final thoughts

As the first proposed legal framework for reining in the risks posed by artificial intelligence, the European Commission’s draft AI regulation is poised to have significant influence over the global debate around algorithmic accountability. The current draft includes some positive provisions around harmful and high-risk algorithmic systems. However, it also features numerous gaps that, if not addressed, stand to limit the effectiveness of the proposed regulation.

The draft regulation is also an important signal that regulation of algorithmic systems is top of mind for EU policymakers, and that legislators in other nations—especially those that are home to many large developers of AI systems, like the United States—need to follow suit.
