Ethics and technology have always been tightly interwoven, but as artificial intelligence (AI) marches forward and impacts society in novel ways, the stakes—and repercussions—are growing.
"There is potential for (AI) to be used in ways that society disapproves of," observes David S. Touretzky, a research professor in the computer science department at Carnegie Mellon University.
One idea that's gaining momentum is AI ethics instruction in schools. Groups such as AI4K12 and the MIT Media Lab have begun to study the issue and develop AI learning frameworks for K-12 students. In many cases, the materials approach the topic in broad and holistic ways. At the same time, many universities are expanding and strengthening their existing AI ethics content.
The goal? "We need to teach our children to understand the decisions they make (related to AI) and how they impact society," says Tom Yeh, assistant professor of computer science at the University of Colorado, Boulder. "Do we really want to wait until they enter college or the workforce? We are seeing an explosion of technology in our lives and there needs to be more thought about how it's designed and used."
There is a growing consensus that AI ethics instruction is critical, and that it must extend beyond computer science courses. "We need to have conversations about the technologies we want to build, and how we as a society are responsible for ensuring that technologies reflect our values, and that they do good for the world," notes Gina Neff, a senior research fellow at the Oxford University Internet Institute and an associate professor in the Department of Sociology at the University of Oxford.
Traditionally, ethical frameworks have come from groups like ACM, which in 2018 released its own revamped Code of Ethics and Professional Conduct. Although many universities include some computer ethics instruction in computer science courses, these modules have mostly been aimed at graduate-level students.
Natalie Garrett, a Ph.D. student and researcher at the University of Colorado, Boulder, is among those promoting a broader framework. With support from the Mozilla Responsible Computer Science Challenge, Garrett works with professors to revamp existing coursework to shine more light on ethical issues. For example, one area of interest is adding ethics problems to coding assignments. "We want to help educators identify what topics should be taught and create scaffolding for developing future AI ethics education at the university level," she explains.
Ethics instruction is also filtering into primary and secondary schools—and expanding beyond technical issues such as bias, fairness, and privacy. For example, AI4K12 is developing guidelines for AI education, along with tools and resources for students in kindergarten through grade 12. The group is focused on a diverse array of issues, including how computers perceive the world using sensors, how devices learn, and how people interact with systems.
AI4K12 doesn't develop specific curricula; it leaves that task to others. "The guidelines call for students to be introduced to ethical issues relating to the use of AI technology, especially the notions of fairness, transparency/explainability, trustworthiness, and accountability," says Carnegie Mellon's Touretzky, founder and director of the initiative.
Meanwhile, the Massachusetts Institute of Technology Media Lab has created AI + Ethics Curriculum for Middle School, a program focused on developing open-source curricula through lessons and activities. The program has been piloted at Montour Public Schools near Pittsburgh, PA, a school district that also offers a three-day AI course for elementary students touching on ethics issues.
Others, such as Microsoft, Google, nonprofit Code.org, and the U.S. National Science Foundation (NSF), are supporting efforts to develop AI learning programs, typically with elements focusing on AI ethics. "It's important to start with children when they are very young but also offer learning that's appropriate for where they are at in the instructional cycle," says Yeh, who collaborates with the University of Colorado, Boulder's STEM outreach organization to develop AI ethics content, supported by a grant from NSF's STEM+C program.
These programs address a broad swath of technology and AI issues, particularly revolving around the public good and how to create systems that operate in a fair and non-discriminatory manner. "A primary goal," Neff says, "is to build AI systems that do not create intrinsic advantages and disadvantages for different groups and users."
AI ethics instruction typically addresses a range of topics, including public policy, data sourcing, data quality, accountability, explainability, and transparency. Touretzky points out that establishing clearly defined standards is tricky. "'Fair' is harder than it sounds," he says. For example, "When making decisions like who gets a loan, there are multiple definitions of 'fair' that all seem reasonable, but are mutually incompatible." He also believes it's critical to stay focused on broader issues and avoid micromanaging AI decision making.
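Touretzky's point can be made concrete with a small sketch (not drawn from any of the programs above; the data and function names are hypothetical). Two common fairness criteria are equal approval rates across groups ("demographic parity") and equal precision among those approved ("predictive parity"). When two groups have different underlying repayment rates, a single decision rule can satisfy one criterion while violating the other:

```python
# A toy illustration of why two reasonable definitions of a "fair"
# loan decision can conflict. Group A and group B have different
# repayment base rates in this hypothetical data, so the same
# decision rule cannot satisfy both criteria at once.

def rates(decisions, outcomes):
    """Return (approval rate, precision among approved applicants)."""
    approved_outcomes = [o for d, o in zip(decisions, outcomes) if d]
    approval_rate = sum(decisions) / len(decisions)
    precision = sum(approved_outcomes) / len(approved_outcomes)
    return approval_rate, precision

# 1 = approved / repaid, 0 = denied / defaulted (made-up data)
group_a_decisions = [1, 1, 1, 1, 0, 0, 0, 0]
group_a_outcomes  = [1, 1, 1, 0, 1, 0, 0, 0]
group_b_decisions = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_outcomes  = [1, 1, 0, 0, 0, 0, 0, 0]

a_approval, a_precision = rates(group_a_decisions, group_a_outcomes)
b_approval, b_precision = rates(group_b_decisions, group_b_outcomes)

# Demographic parity holds: both groups are approved at the same rate.
print(a_approval, b_approval)    # 0.5 0.5
# Predictive parity fails: approvals in group B are less often repaid.
print(a_precision, b_precision)  # 0.75 0.5
```

Raising the bar for group B to equalize precision would then break the equal approval rates, which is the sense in which the two definitions are mutually incompatible.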
Ultimately, Yeh says, ethics education must address broader issues that transcend AI—and it must be woven into all computer science instruction, rather than residing in specialized coursework. "People must balance a desire to make money with doing things that benefit society, and we must ensure that everyone has an opportunity to learn about AI and ethics in school. The end goal is to elevate human values, rather than devalue them."
Samuel Greengard is an author and journalist based in West Linn, OR, USA.