Chinese Interests Take a Big Seat at the AI Governance Table

Government and Industry Team to Shape Emerging AI Standards-Setting Process
Blog Post
June 20, 2018

This analysis by Jeffrey Ding, Paul Triolo, and Samm Sacks is accompanied by translated excerpts of the Chinese government’s White Paper on Artificial Intelligence Standardization, available here.

Introduction

Last summer the Chinese government released its ambitious New Generation Artificial Intelligence Development Plan (AIDP), which set the eye-catching target of national leadership in a variety of AI fields by 2030. The plan matters not only because of what it says about China’s technological ambitions, but also for its plans to shape AI governance and policy. Part of the plan’s approach is to devote considerable effort to standards-setting processes in AI-driven sectors. This means writing guidelines not only for key technologies and interoperability, but also for the ethical and security issues that arise across an AI-enabled ecosystem, from algorithmic transparency to liability, bias, and privacy.

This year Chinese organizations took a major step toward putting these aspirations into action by releasing an in-depth white paper on AI standards in January and hosting a major international AI standards meeting in Beijing in April. These developments mark Beijing’s first stake in the ground as a leader in developing AI policy and in working with international bodies, even as many governments and companies around the world grapple with uncharted territory in writing the rules on AI. China is eager to participate in international standards-setting bodies on the question of whether and how to set standards around controversial aspects of AI, such as algorithmic bias and transparency in algorithmic decision making.

At minimum, these efforts will significantly shape AI fields within China, where development and deployment of AI is at the center of wide-ranging public- and private-sector efforts. But standards setting globally is just getting started for AI, and there is no consensus on what aspects of AI require a standards-based approach.

Taking on the challenge of developing AI standards is the new SC 42 (Subcommittee 42), which sits under Joint Technical Committee 1 (JTC 1), itself a joint body of two widely respected standards organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). (Its full designation is thus ISO/IEC JTC 1/SC 42.) Established in October 2017, SC 42 includes participants from many countries and was the host for the April meeting in Beijing.

Realizing that China’s many large companies are increasingly global players and that Chinese-developed AI algorithms will affect users outside of China, the Chinese government aims to advance global efforts to set standards around ethical and social issues related to AI algorithm deployment. Should Chinese officials and experts succeed in influencing such standards and related AI governance discussions, the policy landscape may skew toward the interests of government-driven technical organizations, attenuating the voices of independent civil society actors that inform the debate in North America and Europe.

What standards mean in China and why Beijing cares about helping write international standards on AI

Over the past two years, the Chinese government has cranked out dozens of information and communications technology (ICT) standards, particularly in cybersecurity and digital economy domains. The government uses standards both as national policy tools and, potentially, as international protocols or guidelines for design and interoperability.

Domestic policy–oriented standards act more like a form of regulation, spelling out requirements that companies can be audited against even if they are not formally binding. They often flesh out the details of higher-level laws. Internationally, meanwhile, the government has stressed the importance of China playing a leadership role in writing global standards, both for economic reasons and because of the national prestige associated with having what is referred to as a “right to speak” and a seat at the table in global forums. Both drivers are evident in January’s white paper, which is discussed in greater depth below.

Drivers for Chinese government standardization efforts

In particular, the Chinese government views standards as playing a significant role in the country’s aspirations for AI leadership. There are a number of different drivers behind this push.

First, the government hopes that its role in standardization will generate more value out of AI technologies by facilitating data pooling and improving the interoperability of systems. The importance of standards in spurring economic development, particularly for ICTs, is pervasive in Chinese policy and industry circles. According to a popular saying, “First-tier companies make standards, second-tier companies make technology, and third-tier companies make products (一流的企业做标准,二流的企业做技术,三流的企业做产品).”  

Second, setting standards may strengthen the commercial competitiveness of Chinese companies globally. Technical standards embedded in a technology stack, such as for 5G next-generation mobile, incorporate essential patents, and companies that contribute intellectual property to the overall system receive royalties when other companies build equipment using their patents. One of the AIDP’s near-term goals, targeting 2020, states: “The AI industry’s competitiveness should have entered the first echelon internationally. China should have established initial AI technology standards, service systems, and industrial ecological system chains. It should have cultivated a number of the world's leading AI backbone enterprises.” The AIDP was thus in part a standards-centered policy: The Chinese word for standards (标准) appears 24 times in the AIDP; by comparison, the Chinese word for policy (政策) appears 26 times. Noting that the United States, the European Union, and Japan have all put forward policies related to AI standardization, the authors of the white paper view standardization as a crucial element in “seizing a new round of technology dominance” and ensuring the competitiveness of Chinese AI products and services in the international market.

Yet China’s prioritization of technical standards in AI policy is not solely motivated by economic gains. Developing standards that improve the quality of AI products and services may also reduce the risk of societal backlash to technology. Similarly, specifying methods for testing and assessing facial recognition systems or service robots to prevent high-profile accidents could cultivate societal trust in these new technologies. Ensuring that the advancement of AI does not disrupt societal stability is also a goal of the AIDP, which acknowledges that the government will have to deal with the social aftershocks of AI development, such as deepening income inequality and urban-rural disparities. Lastly, as an official Chinese readout of the SC 42 meeting in Beijing indicates, Chinese authorities view standardization efforts as a way to take a leading role in international governance on the safety and ethics of AI.

Finally, the drive to shape international standards (part of the “right to speak”) reflects long-standing concerns that Chinese representatives were not at the table to help set the rules of the game for the global Internet. The Chinese government wants to make sure that this does not happen in other ICT spheres, now that China has become a technology power with a sizeable market and leading technology companies, including in AI.

Assembling an AI standards effort

A number of different organizations within the bureaucracy have a role in the standards-setting process. The main player in ICT-related standards is the China Electronics Standardization Institute (CESI), which sits under the Ministry of Industry and Information Technology (MIIT). CESI led the effort to corral the more than two dozen companies, associations, and academic organizations that contributed to the AI white paper (see table below), helping solidify its role as a synthesizer of interagency work on AI standards.

Organizations Contributing to the White Paper on Artificial Intelligence Standardization
China Electronics Standardization Institute (CESI) 中国电子技术标准化研究院
Institute of Automation, Chinese Academy of Sciences 中国科学院自动化研究所
Beijing Institute of Technology 北京理工大学
Tsinghua University 清华大学
Peking University 北京大学
Renmin University of China 中国人民大学
Beihang University 北京航空航天大学
iFlytek 科大讯飞股份有限公司
Huawei 华为技术有限公司
IBM (China) 国际商业机器(中国)有限公司
Alibaba Cloud (Aliyun) 阿里云计算有限公司
Institute of Computing Technology, Chinese Academy of Sciences 中国科学院计算技术研究所
China Telecom 中国电信集团公司
Tencent 腾讯互联网加(深圳)有限公司
Alibaba 阿里巴巴网络技术有限公司
Shanghai Computer Software Technology Development Center 上海计算机软件技术开发中心
Shanghai Zhizhen Network Technology 上海智臻智能网络科技股份有限公司
iQIYI 北京爱奇艺科技有限公司
Beijing Shengzhiguang Technology 北京有生志广科技有限公司
Jixianyuan 极限元(北京)智能科技股份有限公司
Bytedance (Toutiao) 北京字节跳动科技有限公司(今日头条)
SenseTime 北京商汤科技开发有限公司
Ant Financial 浙江蚂蚁小微金融服务集团有限公司
Baidu 百度网络技术有限公司
Intel (China) 英特尔(中国)有限公司
Panasonic (China) 松下电器(中国)有限公司
Chongqing KaiZe 重庆凯泽科技股份有限公司
Haier 海尔工业智能研究院有限公司
Cloudwalk 重庆中科云从科技有限公司
Beijing DeepGlint 北京格灵深瞳信息技术有限公司

China’s AI standards-setting efforts reveal diverging global approaches. For its part, China is aggressively pushing ahead in a “technical standards ‘going out’” partnership with domestic AI enterprises, as most of the drafters of the white paper were from the private sector.  Absent were major civil society groups of the type that are leading similar discussions in the United States, for example the Partnership on AI and its members, the AI Now Institute, and the community surrounding the Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) workshop series.

The U.S. government’s approach to the issue is different, with voices pushing to incorporate AI regulatory issues into existing sectoral frameworks and an emphasis on broader stakeholder engagement. Tim Day of the U.S. Chamber of Commerce’s Center for Advanced Technology and Innovation captures the sentiment of many U.S. industry leaders regarding AI standardization: “It is vital to recognize that AI is well covered by existing laws and regulators with respect to privacy, security, safety, and ethics.” Still, one of three white papers on AI released by the Obama administration, "The National Artificial Intelligence Research and Development Strategic Plan," emphasized the importance of standards, benchmarks, and testbeds with the involvement of the “AI community—made up of users, industry, academia, and government.”

Chinese Moves to Assert a Role in Writing AI Standards

In 2018, the Chinese government took two significant steps in its effort to build domestic and international AI-related standards: publishing a white paper on the topic in January and hosting a major international standards meeting in Beijing in April.

The Beijing meeting

In April in Beijing, a large Chinese delegation coordinated by CESI and led by CESI Vice President Sun Wenlong presented the white paper to the first meeting of SC 42. With ties to the ISO and IEC, SC 42 is positioned within a standards architecture that produces 85 percent of all international product standards. The 19 countries with full membership rights in SC 42 include many of the key players in the global AI landscape, with China, Canada, Germany, France, Russia, the United Kingdom, and the United States among them.

The Chinese delegation included both government and leading private sector companies involved in AI research and development, including large commercial firms such as Tencent, Huawei, SenseTime, and iFlytek, plus academic leaders from Peking University, Renmin University of China, the Institute of Automation at the Chinese Academy of Sciences, and Beijing University of Aeronautics and Astronautics.

The meeting established four working groups under SC 42:

  • Working Group (WG) 1: Foundational standards

  • Study Group (SG) 1: Computational approaches and characteristics of AI systems

  • SG 2: Trustworthiness

  • SG 3: Use cases and applications

The groups are to focus on developing standards for AI terminology, reference architecture, algorithm models, computational methods, security, trustworthiness, use cases, and application analysis. Beyond the fact that the first meeting was held in Beijing, Chinese participation was significant. CESI official Liu Yuli served as the convener of SG 1, and Chinese representatives submitted various proposals for the committee’s work agenda, including language on neural network representations, model compression, and knowledge maps. More importantly, JTC 1, which oversees SC 42, has indicated it will urge the ISO Technical Management Board to agree that the scope of SC 42 research should cover social issues surrounding AI, including AI self-governance, robotics, ensuring the benign nature of the industrial Internet of Things, algorithmic bias, and other issues. This wide scope for SC 42, combined with China’s strong initial role in the group, could enable Chinese actors to influence a wide range of issues related to AI standards, including ethical and social norm development.

Chinese media summaries have held up the meeting as emblematic of the country’s influence, or “right to speak” (话语权, also translated as “discursive power”), in international fora on AI issues, with one article saying: “CESI actively coordinated the participation of experts from enterprises, universities, and research institutes to form a Chinese delegation and submit international proposals to win the ‘convener’ position for China, enhancing China's international ‘right to speak’ in the field of artificial intelligence.”

At a minimum, the Beijing meeting represents a major win for the developers of the AIDP and for China’s standards bodies, which will now play a leading role in the development of international standards around AI.

The white paper

In January the Standards Administration of China (SAC) issued the “White Paper on Artificial Intelligence Standardization” to coordinate AI standards work (or, in Chinese bureaucratic terms, to strengthen its “top-level design”). Overseen by CESI, the white paper was a joint effort by more than 30 academic and industry organizations. Along with the creation of the National Artificial Intelligence Standardization General Group and Expert Advisory Group (see Section D here), this white paper is part of the Chinese government’s effort to claim a leadership position in setting domestic and international standards for the AI-related industrial ecosystem. (Excerpts of the white paper have been translated by DigiChina and are available here.)

The paper comprehensively outlines the status of both China’s and the rest of the world’s AI standardization work, proposes a standards system that would address AI from its foundational concepts to its end-of-the-pipe applications, and lists 23 standards as urgently needed in the near term. The white paper highlights the rapid pace of development of AI and reflects the sense of urgency felt by regulators and academics in China that the government needs to be out in front in helping to shape the direction of development of a range of AI technologies and applications, where the private sector is forging ahead.  

The paper’s authors provide a framework for developing AI standards that attempts to incorporate existing efforts on traditional technical standards, broken into five layers: foundations, platform and support, key technologies, products and services, and applications. The framework also includes the category of safety (安全, also translated as “security”) and ethics, which cuts across the other layers of the framework (see below, a translated version of Figure 4 from the white paper).

AI Standards Architecture

The white paper attempts to bring together all existing or planned standards that may be related to AI, and acknowledges that considerable work has already been done at narrower technical levels in areas related to specific aspects of technologies critical to AI applications. The white paper’s appendix lists 200 AI-related standards that have already been published, are being developed, or have been proposed for development. To date, 20 of these standards are in the process of being adopted by the ISO/IEC, though all but one of them are in biometrics; the remaining one concerns robotics.

Substantial Chinese contributions are likely to focus on standards in the outer layers of the AI domain, particularly in the products/services and applications layers. This is because leading Chinese AI firms such as Alibaba, Tencent, Baidu, iFlytek, and SenseTime have considerable experience in using AI to solve business operational problems.

Data privacy, AI ethics a focus of white paper

Unlike traditional standards for digital video players or document formatting, standards for AI technologies—which raise unique safety, ethical, and privacy issues—must address an entire AI-enabled ecosystem. In terms of ethics, the white paper lays out four key principles: 1) the principle of human interests, 2) the principle of liability, 3) the principle of transparency, and 4) the principle of the “consistency of rights and responsibilities.” The last principle aims to balance the responsibilities of companies to ensure AI systems are transparent with the rights of companies to protect trade secrets.

The white paper’s thoughtful and at times frank discussion of data privacy standards reflects an emerging, important debate over privacy protections in China. On the one hand, there is demand from the public for restrictions on how companies collect and use personal information. These concerns are reflected in a standard for personal information security, which aims to strengthen users’ control over how their data is handled by companies. At the same time, the government does not want to make the rules so strict that they inhibit AI development. This dynamic underscores an unresolved tension.

Indeed, the drafters of the white paper grapple with how to strike a balance, acknowledging that the definition of consent in existing data regulation does not go far enough to reflect the complexity of the concept with the development of AI technologies. The white paper calls for a new regulatory framework for “the use of AI to possibly derive information that exceeds what citizens initially consented to be disclosed.”

In some parts, the drafters include very technical and detailed language. For instance, one section of the white paper calls for changing the data representation and compression models in neural networks in order to preserve privacy while still ensuring that data can be exchanged across platforms.

The ideas put forward in the white paper are not definitive answers, but rather a proposal for addressing new challenges that are being discussed widely by the government, industry, and academia. The most recent example occurred at the Global Mobile Internet Conference in Beijing in April, where Chinese and foreign experts held a roundtable devoted to “the contradiction between data sharing and privacy protection.” As these debates play out, and data governance challenges grow more complex, China’s effort to write standards for data privacy and AI marks another important signpost of China exercising its “right to speak” in shaping the rules for emerging technologies.

Given that the security, ethical, and privacy risks of AI and AI-enabled technologies will only become more salient as the technology progresses, China’s attempt to set standards addressing these issues will be critical both to its sustainable development of AI and to the development of its framework for data privacy, data protection, and cross-border data flows.

What’s Next and What to Look For

Now that the white paper has been presented and the new SC 42 committee on AI standards has convened its first meeting, what lies ahead for China and international standards development?

According to people familiar with the process, the election of the SC 42 secretariat was hotly contested, and the American National Standards Institute (ANSI) was eventually selected. Since international standards bodies like SC 42 often do not have sufficient staffing resources or the technical expertise to manage everything related to standards development, the secretariat plays an important guiding role. As a consolation of sorts to China, Beijing was chosen as the location for the first meeting of SC 42, and Wael Diab, a senior director at Huawei, was selected to chair the committee.

The standards-setting process in contentious areas such as data privacy and AI ethics is likely to be protracted, and major disagreements can be expected even in less controversial areas such as algorithmic transparency and accountability. Establishing technical standards in AI policy is primarily for the purpose of spurring economic development, but standards-setting also serves a wider range of strategic objectives. The acknowledgement at the first meeting that the purview of the committee should extend into controversial social and ethical issues is likely to produce serious disagreement among some of the countries party to the SC 42 process.

Standards, though typically written in dry technical language, will speak volumes about China’s AI industry and the global governance of AI. For China’s AI industry, standards will be crucial to building AI-enabled systems that are safe, trustworthy, and controllable, which is necessary for the growth of the AI industry in China and will affect the global competitiveness of Chinese tech companies. Though technological breakthroughs and the market share of different firms will propel standards development in most cases, there is also a risk that China’s assertive approach to standards-setting will result in technological lock-in and stifle competition. China’s prioritization of technical standards in AI policy demonstrates that it understands the soft benefits of being able to set the rules of the road in a strategic technology area. Tracking the progress of SC 42 in setting standards, as well as the extent to which China can carve out a “right to speak” in international standards bodies, will be an important indicator of how AI technologies will be governed internationally.