BLOG@CACM
Artificial Intelligence and Machine Learning

Safe AI in Education Needs You

Interest and investment in AI for Education are accelerating. So is concern about the issues that will arise when AI is widely implemented in educational technologies, such as bias, fairness, and data security. With our team at the Center for Integrative Research in the Computing and Learning Sciences (CIRCLS), we see that organizations around the world, like UNESCO and the new EdSafe AI Alliance, are organizing people to tackle these issues. In the U.S., organizations like Stanford's HAI are addressing the issues AI raises in healthcare, but not so much in education. Over the past year, my colleagues organized a working group on AI and education policy. Read this blog to learn why and how you should be involved, too. No matter where you live, AI is coming to your learners and will raise challenging issues that need experts like you.

As a working definition, participants considered AI to be "any computational method that is made to act independently towards a goal based on inferences from theory or patterns in data." As readers may know, AI and machine learning have been used in education to personalize student assignments, to power early warning systems, to grade essays, and more. The National Science Foundation recently invested $60 million in three institutes to explore new applications, such as AI to make group learning more effective, AI in support of narrative-centered learning, and AI to accelerate adult learning. We expect uses of AI to expand rapidly in the next few years.
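To make that working definition concrete, below is a minimal, purely illustrative sketch (in Python, using scikit-learn) of the kind of pattern-based inference behind an early warning system. The features, synthetic data, and risk rule are hypothetical stand-ins for real student records, which is exactly where the questions of bias, fairness, and data security above come in.

    # Illustrative toy example: an "early warning" model that infers risk
    # from patterns in (here, synthetic and hypothetical) student data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical features: attendance rate, average grade, engagement score.
    X = rng.random((500, 3))
    # Toy label: flag "at risk" when attendance and grades are both low.
    y = ((X[:, 0] < 0.4) & (X[:, 1] < 0.5)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    # A real system would flag students for intervention based on such scores,
    # which is precisely where accountability and explanation questions arise.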

Our group met biweekly from March to June 2021, with a total of 49 participants from academia, industry, and K-12 education. Participants compiled an AI and Education Policy Reading List, which is a great starting point. Participants were not policy experts, so a big question was: what can policy do in the realm of AI for education?

Here are questions our group considered: How can policy increase equitable access to AI tools and curricula across U.S. schools? How can policies bolster AI literacy among educators and leaders? During product design, limited AI knowledge among management, customer-facing personnel, and others in a company can threaten the creation of ethical AI. As products are used, limited AI knowledge can contribute to poor instructional tool selection, misuse during implementation, and overlooked student needs for AI education. How can policies increase accountability, for example, when AI recommends decisions that are consequential for a student's opportunities in school? Finally, how can policies encourage clearer explanations of AI tools? Without clarity, educators may be confused about their roles in making decisions for their students or may dissociate themselves from the end impacts, and tools may be misused and mismarketed.

Four modes of policy change, operating at different levels and degrees of enforcement, could contribute to AI for social good, and each offers ways for interested CACM readers to get involved.

Policy Mode 1: Guidelines. Agencies are working on guidelines for ethical AI. While these are non-binding and unenforceable, they can set the standard for policies down the line. If you want to get involved with setting guidelines, you could reach out to the organizations listed below or start something similar in your own community:

  • The AI4K12 Initiative is fostering a community of stakeholders to build capacity for state-level curriculum planning, develop national guidelines for teaching AI, and curate AI resources for K-12 teachers.

  • The International Ethical Framework for AI in Education, released in March 2021, sets eight objectives for ethical AI development and was created through a series of discussions with experts and round tables with young people.

  • The National Education Technology Plan (NETP) from the U.S. Department of Education's Office of Educational Technology is released about every four years to set priorities and describe issues around implementing educational technology nationally. The U.S. government develops the NETP with input from educators, technical experts, and the public.

Policy Mode 2: Regulation. All levels of government have the power to implement policies, laws, bills, and regulations that are binding, for example, on how student data must be protected and how students with varied needs must be supported. Even without a formal career in government or law, you can get involved in policy making: raise the issues at a local ACM chapter meeting, join a national organization (such as ISTE), or connect with a grassroots group in your community. Examples of policies appear below:

Policy Mode 3: Local School Involvement

School districts make purchasing decisions, educate their decision makers, often have chief technology officers, and curate resources for smoother AI adoption. Similar needs arise in higher education as well. Schools are in a particularly good position to address AI literacy by creating teacher professional development requirements. Schools also play a critical role in vetting technology. As a researcher, a parent, or a member of your community, you can get involved in your district. You can inform decision makers by making presentations to local groups of educators or school boards. As a researcher, you can communicate your work in low-jargon, action-oriented language to K-12 stakeholders through teacher journals and online sites, or partner with schools in your research projects. You can also work with associations of school leaders, who need experts to address these new frontiers. For example, schools and districts draw on resources from organizations like ISTE's Standards for Students, the DigCitCommit Coalition's five digital citizenship competencies (inclusive, informed, engaged, balanced, and alert), and CSTA. Schools can also pay more attention to purchasing and technology selection criteria, such as the guidelines recommended by The Block Uncarved.

Policy Mode 4: Pledges

NGOs, non-profits, and other free-standing organizations can create pledges and product certifications that ask signers to commit to certain standards. Pledges can help set a baseline for what is or is not acceptable in a field, while product certifications go a step further by requiring evidence of compliance. Pledges and product certifications also offer plenty of opportunities for involvement. If you see a gap that existing pledges do not cover, you could reach out with suggestions or even create your own pledge to pitch to organizations that can formalize it and follow through. Some sample pledges are:

  • Project Unicorn’s Data Interoperability Pledge for Vendors and School Networks is aimed at increasing clarity of data usage for all stakeholders involved. This pledge is not enforceable but provides certain incentives for signers.
  • The Student Privacy Pledge is aimed at vendors, calling for trust and transparency when collecting and using private student data. This pledge is voluntary but legally enforceable.
  • The EdTech Equity Project and Digital Promise are partnering on an EdTech Equity Product Certification that will require companies to demonstrate equity in all cycles of AI tool development. The certification is not legally enforceable but will signal to potential users which equity considerations are embedded in an AI tool.

Overall, the industry pace is accelerating: we will increasingly see AI incorporated into products that affect learning opportunities for students. The issues are difficult and require the kinds of technical expertise found in this ACM community. We call for more ACM members to get involved: this community can support quicker and smarter policy implementation by bringing its depth of expertise to making more ethical AI policies for education.

Acknowledgements: Thank you to Joseph Fatheree, Kasey Van Ostrand, Amy Eguchi, Marlo Barnett, and Trisha Callella for introducing many of the above policies, regulations, and pledges to the AI & Ed Policy group! This blog is based upon work supported by NSF grant #2021159. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the NSF.

Leah Friedman is a staff researcher at the University of Pittsburgh and the Center for Integrative Research in the Computing and Learning Sciences. Nancye Blair Black is the founder and CEO of The Block Uncarved, an international speaker and author, Project Lead for ISTE's AI Explorations program, and is completing her doctoral degree at Teachers College, Columbia University. Find her on Twitter. Erin Walker is an associate professor at the University of Pittsburgh with a joint appointment in the School of Computing and Information and the Learning Research and Development Center. Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences.
