The U.S. is the world leader in artificial intelligence (AI), having dominated the field since its inception in 1956 at the Dartmouth Summer Research Project on AI. Today, IBM's Watson, conceived at the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) Text Retrieval Conference, typifies the state of the art in AI.
Continuing in that tradition, NIST in August released U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools. The plan's aim is to ensure U.S. dominance in AI continues, even in the face of billion-dollar foreign nation-state budgets aimed at surpassing the U.S.
"Research dollar amounts are not the measure of dominance," said Elham Tabassi, acting chief of staff at NIST's Information Technology Laboratory. "Our plan for U.S. leadership in AI calls on all U.S. government agencies, such as the National Science Foundation [NSF], the Defense Advanced Research Projects Agency [DARPA], and the National Laboratories to work together with academia and industry to promote research that harmonizes with standards that are technically sound and testable."
The White House defines AI policy from guidelines provided to it by the National Science and Technology Council (NSTC), which has multiple subcommittees for AI, including a coordinator specifically charged with harmonizing NSTC policies with NIST's Plan. Those bodies, in turn, depend on input from academia, industry, and independent think tanks that attempt to level the playing field in the face of the partisanship stakeholders sometimes exhibit.
The Center for Data Innovation think tank, for instance, had been furrowing the ground with its report How Policymakers Can Foster Algorithmic Accountability for over a year before NIST began sowing the seeds for U.S. continued dominance in AI. The Center's report was released in May 2018, whereas NIST's Federal Engagement in Artificial Intelligence Standards Workshop, where its Plan was born, took place in May 2019. According to Center senior policy analyst Joshua New (who co-authored its report), one of the Center's primary recommendations for algorithmic accountability in AI is to "strengthen U.S. leadership in developing AI standards, and in encouraging their broad adoption."
"Our mission is to advance innovation. We are located in Washington D.C., but we want to make everyone better worldwide. By aligning with NIST's call for open standards, we hope to help the U.S. stay ahead, but we also believe that standards will be a boon for everyone. Of course, the influence that NIST's support will have on U.S. regulatory bodies will definitely have a positive impact on U.S. competitiveness," said New.
The Center had planned its own workshop for May 2019, but after learning of NIST's planned workshop, it incorporated its agenda into that of NIST. As a result, the Center was responsible for attracting luminaries to the NIST workshop, including White House Office of Science and Technology Policy assistant director for Artificial Intelligence Lynne Parker, Microsoft Corporate Standards Group general manager Jason Matusow, and Nvidia North America Public Sector vice president Anthony Robbins.
"Standards propel the realization of fairness, transparency, explainability, and even criminal justice. Standards also encourage the elimination of discrimination and bias," said New. "We can't make progress on these issues without standards—at the very least, everybody has to be using the same standard definitions of their terminology."
NIST's workshop began the standardization process, but the Plan itself was actually prepared in direct response to Presidential Executive Order #13859, which calls for "Continued American leadership in AI [which] is of paramount importance to maintaining the economic and national security of the United States, and to shaping the global evolution of AI in a manner consistent with our Nation's values, policies, and priorities."
According to Tabassi, NIST's "standards will promote an interoperable marketplace which will ensure wide adoption of AI, as well as promote AI development and innovation by specifying and establishing the 'rules of the game'—for example, using the same vocabulary, establishing a set of best practices, and by following uniform guidelines that make sure AI meets U.S. requirements, especially for trustworthiness."
The AI software and hardware interoperability that results from well-defined standards forms the basis for rapid innovation in the U.S., according to NIST, and the inclusion of trustworthiness will encourage public confidence in decisions to which AI contributes.
NIST, however, does not plan on actually inventing standards; rather, its Plan calls for promoting those standards which enable interoperability and inspire trust.
"The standards themselves are developed by independent organizations," said Tabassi.
The NIST Plan lists the current AI standardization efforts that will be most important moving forward, which include those of the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE), the American Society for Testing and Materials (ASTM), the Consumer Technology Association (CTA), the International Telecommunication Union, Telecommunication standardization sector (ITU-T), the Object Management Group (OMG), the Society of Automotive Engineering International (SAE International), the U.S. Department of Transportation (DOT), and the World Wide Web Consortium (W3C).
Of these ongoing efforts, NIST's Plan intends to promote those research and development standards that are not only interoperable, but which are "understandable by non-experts and easily testable," said Tabassi. "We also want to make sure that the world's best AI education remains in U.S. academic and industrial training centers."
A final goal that NIST is promoting is regulatory- and procurement-friendly AI standards that are flexible enough to keep up with the pace of innovation.
"The whole area of AI is very fast-paced. As such, it is important to make sure AI standards don't inhibit innovation," said Tabassi. "Our concept of 'flexibility' addresses that need to keep up, but always on solid technical grounds."
Most importantly, as expressed in the first paragraph of the NIST Plan's description of the Executive Order, is "ensuring that technical standards...reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies...by developing international standards that promote and protect those priorities."
Trustworthiness is the most important expression of the Executive Order's principles, according to Tabassi. Trustworthiness is defined in NIST's Plan as an ethical issue that makes sure AIs are not "black boxes," but rather include detailed explanations backed up by tangible "paper trails" that accurately document the implicit inferences from which AI recommendations are made, and which verify their objective validity in the real world. NIST's Plan shies away from using the term "fairness," which it considers a loaded term that evokes political correctness, while "trustworthiness" evokes ethical considerations that can be objectively measured, according to Tabassi.
"We still do not know how to create constraints that are 'fair' to all parties. However, trustworthiness can be objectively defined," according to Tabassi.
Executive Order #13859 gave NIST 180 days to complete its Plan, citing the urgency of mitigating U.S. vulnerability to attacks from malicious actors. In this short time, it conducted its workshop and subsequently received suggestions from 40 respected organizations in industry, academia, and government. From these, NIST drafted nine areas and eight tools that need to be standardized to meet the goals of the Executive Order, and is currently seeking to refine and prioritize them by soliciting further public, industry, academic, and governmental opinions.
The areas of focus are: concepts and terminology; data and knowledge; human interactions; metrics; networking; performance testing and reporting methodologies; safety; risk management; and trustworthiness.
Regarding tools, NIST's Plan calls for: data sets in standardized formats, including metadata for training, validation, and testing of AI systems; tools for capturing and representing knowledge, and reasoning in AI systems; fully documented use cases that provide a range of data and information about specific applications of AI technologies, and any standards or best-practice guides used in making decisions about deployment of those applications; testing methodologies to validate and evaluate AI technologies' performance; metrics to quantifiably measure and characterize AI technologies; benchmarks, evaluations, and challenging problems to drive innovation; AI testbeds; and tools for accountability and auditing.
NIST also plans further workshops to refine the Plan and help ensure continued U.S. dominance in AI.
R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.