BLOG@CACM
Artificial Intelligence and Machine Learning

Time to Assess National AI Policies


The artificial intelligence (AI) ethics field is booming. According to the Council of Europe, there are now more than 300 AI policy initiatives worldwide. Professional societies such as the ACM and the IEEE have drafted frameworks, as have private companies and national governments. Many of these guidelines set out similar goals: human-centric policies, fairness, transparency, and accountability. But little effort has been made to evaluate whether national governments have actually taken steps to implement these policies.

The Center for AI and Digital Policy has undertaken the first comparative review of national AI policies. Our goal is to understand the commitments that governments have made, the AI initiatives they have launched, and the policies they have established to protect fundamental rights and to safeguard the public. Constructing the methodology for such a survey is not a simple task. A country can commit to "fairness" in AI decision-making, as many have, but determining whether it is putting that commitment into practice is much harder.

We started with widely recognized frameworks for both AI and human rights. Why human rights? Governments and businesses already have incentives to track research investments and publication counts, and many reports cover those topics. We want instead to explore issues of public concern and political rights. The Universal Declaration of Human Rights, for example, provides the best-known (and most widely translated) statement of fundamental rights.

Another starting point is the OECD/G20 AI Principles, the first global framework for AI policy. More than 50 national governments have endorsed this framework, which includes the familiar goals of fairness, accountability, and transparency. But the OECD/G20 AI framework is also incomplete. And so we look to other influential AI frameworks, such as the Universal Guidelines for AI and the Social Contract for the Age of AI, to see whether countries are willing to limit controversial applications or are pursuing broader policy goals for the Age of AI.

The OECD itself recently found that while many G20 countries are "moving quickly to build trustworthy AI ecosystems," few national policies emphasize principles of robustness, safety, and accountability. And so, for accountability metrics, we look to recent resolutions on AI and accountability from the Global Privacy Assembly, the leading association of privacy experts and officials.

We are also interested in process issues, such as whether countries have created mechanisms for public participation in the development of AI policies, as well as whether reports are publicly available. Transparency is a key goal not only for algorithms, but also for decision-making.

In our initial survey of governments, we found that some centralize AI policymaking in a single ministry or science agency, while others have several agencies with AI policymaking authority. The single agency model is likely more efficient, but a government structure that includes, for example, a data protection agency or a human rights commission, is likely to produce a national policy that better reflects public concerns. Our methodology favors the second approach.

"Algorithmic transparency" is a controversial topic in the AI policy realm. Some favor simple explainability, arguing that it is essentially impossible to prove outcomes in a machine learning environment. Others say that automated decisions, particularly for such critical decisions as hiring, credit, and criminal justice should be fully transparent. We are examining how well countries are addressing this challenge. Could a court rely on a "black box" for a criminal sentence? Some countries have said "no."

We also want to see whether countries are prepared to draw "red lines" around certain AI applications. Facial recognition, for example, is widely criticized for bias in data sets and also raises the danger of mass surveillance. And some countries have begun to score their citizens to determine their level of patriotism, while others have assigned secret risk assessment scores to travelers. Will countries limit these practices?

AI policy resonates in many domains. We don't plan to look at the impact of AI on labor markets, but we are interested in the risks of Lethal Autonomous Weapon Systems. Several countries have proposed an international treaty to ban such AI-driven weapons.

We anticipate that the 2020 AI Social Contract Index will provide a baseline for future work. We plan to update the report annually to assess overall trends, as well as convergence and divergence among national policies. In a similar survey of global cryptography policies that we undertook many years ago, we found governments moving toward more privacy-protecting policies as they began to recognize the need for strong encryption to protect personal data and ensure network security. In another comparative study, we observed the growing divergence between the U.S. and EU approaches to online privacy during the early days of the commercial Internet, though that divergence is now diminishing.

It is too early to predict the future of AI policy, but it is important to start now. Creating an AI ethics framework is an important task, but more important will be examining the impact of these frameworks on actual practices. And there is real urgency in this work. If we are going to maintain human-centric AI policies, we need humans to actually evaluate these policies.

 

Marc Rotenberg is the founder of the newly established Center for AI and Digital Policy at the Michael Dukakis Institute. He is the editor of the AI Policy Sourcebook and served on the OECD AI Group of Experts.
