Artificial Intelligence and Machine Learning BLOG@CACM

Making AI Fair, and How to Use It

Marc Rotenberg looks at how an early study of automated record-keeping led to the 1974 Privacy Act, while Jeremy Roschelle considers different aspects of human-centric AI.

Marc Rotenberg and Jeremy Roschelle

http://bit.ly/3sQQ67C October 12, 2022

A new technology, broadly deployed, raises profound questions about its impact on American society. Government agencies wonder whether this technology should be used to make automated decisions about Americans. News reports document mismanagement and abuse. Academic experts call attention to concerns about fairness and accountability. Congressional hearings are held. A federal agency undertakes a comprehensive review. Scientific experts are consulted. Comments from the public are requested. A White House press conference is announced. A detailed report is released; its centerpiece is five principles to govern the new technology.

The year is 1973, and the report “Records, Computers, and the Rights of Citizens” (http://bit.ly/3FAARqY) provides the foundation for modern privacy law. The report sets out five pillars for the management of information systems that come to be known as “Fair Information Practices” (http://bit.ly/3sUPsG9). The report will lead to the passage of the 1974 Privacy Act, the most comprehensive U.S. privacy law ever enacted. To this day, the Fair Information Practices, developed by a committee chaired by computer scientist Willis Ware, remain the most influential conception of privacy protection.

Fast-forward 50 years: The “Blueprint for an AI Bill of Rights” (http://bit.ly/3WjAW8D) is announced by the Office of Science and Technology Policy. The 2022 report follows a familiar trajectory, and whether it marks a turning point in U.S. AI policy, as the 1973 report did, is too soon to assess; still, many of the criticisms are far off the mark. Like the “Rights of Citizens” report, the AI Bill of Rights sets out no new rights. And like the 1973 report, the recommendations in the Blueprint require action by others. The most remarkable parallel is the five principles at the center of both reports. The “Rights of Citizens” report set out the Fair Information Practices:

  1. There must be no personal data record-keeping systems whose very existence is secret.
  2. There must be a way for a person to find out what information about the person is in a record and how it is used.
  3. There must be a way for a person to prevent information about the person obtained for one purpose from being used or made available for other purposes without the person’s consent.
  4. There must be a way for a person to correct or amend a record of identifiable information about the person.
  5. Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.

The 2022 Blueprint states:

  • Safety and Security—You should be protected from unsafe or ineffective systems.
  • Fairness and Equity—You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Protection and Privacy by Design—You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  • Transparency and Explainability—You should know an automated system is being used, and understand how and why it contributes to outcomes that impact you.
  • Accountability and Human Decision-Making—You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The Fair Information Practices allocated rights and responsibilities in the collection and use of personal data. The 2022 Blueprint has set out “Fair AI Practices,” allocating rights and responsibilities in the development and deployment of AI systems. This could well become the foundation of AI policy in the U.S.

In the years ahead, it will be interesting to see whether the AI Bill of Rights occupies a role in American history similar to that of the 1973 “Rights of Citizens” report. At the outset, one point is certain: the similarities are striking.


Jeremy Roschelle: Four Conversations About Human-Centric AI

http://bit.ly/3Dsv5Fj September 14, 2022

Reflecting on many conversations over the past year, I have identified four types of conversations about human-centered artificial intelligence (AI). My own work has focused on the need for policies regarding AI in education, so I have been involved in conversations about how teachers, students, and other educational participants should be in the loop when AI is designed, evaluated, implemented, and improved. I have been in many conversations about how surveillance and bias could harm teachers or students, and I have seen wonderful things emerging that could help them. Using education as an example, I reflect on what we talk about when we talk about human-centered AI.

The four types of conversations are:

  1. Opportunities and Risks. The default conversation seems to be about the great things we might be able to do in the near future, if only we mitigate the risks. For example, teachers' jobs have become too difficult: administrative details get in the way of the more-rewarding work of interacting with students. An opportunity with AI is to provide teachers with assistants that make their jobs easier and allow them to spend their energy working directly with students. Yet with AI assistants come risks of teacher surveillance and algorithmic bias, and there are many other risks, too. The overall conversation is about the positive future we could have, if only we minimize the risks.
  2. Trust and Trustworthiness. A conversation about trust has a different flavor than a conversation about opportunities and risks. I find conversations get real when we ask how much we should trust an AI system to automate decisions. Educators are entrusted with the futures of today’s students. If we delegate a decision to technology, are we adequately guarding students’ futures? Likewise, in times when saying the wrong word in a classroom can result in lawsuits against a teacher, how can we be sure an AI assistant is not putting a teacher’s job at risk? Should we trust our AI systems to be free of bias? In education, public discussions of systems engineering are in their infancy. Indeed, the field of learning engineering has been emerging over the past few years. More emphasis is needed on how systems should be engineered to safeguard what we hold dear, and to earn our trust.
  3. Metaphors and Mechanisms. I have also participated in conversations that take a more critical turn by questioning the metaphors used to describe the future use of AI in a societal context. Critics may question whether explaining AI “reasoning” as “human-like” obscures important ways in which AI systems may make mistakes humans rarely make, often involving context. Metaphors may obscure the ways in which an AI system can be surprisingly brittle when conditions change. Metaphors may hide how AI systems are better at reaching goals than we are at specifying and monitoring the right goals. A companion to debunking metaphors can be digging for clear explanations of how AI actually works. For example, I’ve found experts can advance public discourse by explaining AI mechanisms in non-magical, jargon-free terms, so people can better evaluate how and why AI may make good inferences in some situations and poor-quality inferences in others. Overall, this is a conversation that helps the participants shift from perceived magic to gritty reality.
  4. Policies and Protections. My experience is that it’s difficult to get specific about policies or protections we might need for safe AI in education. I think that’s because the people who are good at talking about educational policies do not spend much time thinking about technology, and the people who are good at educational technology don’t spend much time thinking about policies. Of course, there are some existing regulations in education that relate to technology, mostly regarding data security and privacy. We can start by building on those. Yet we need to go beyond data security to address issues like bias and surveillance. I encounter people who really care about the future of AI in teaching and learning, who would like policies and protections to guide a safe future, but they are not exactly sure how to contribute to a policy conversation. I am like this too; I tend to find myself saying, “I’m not much of a policy expert.” In a safe future for AI in education, we all need to be policy experts.

When we stay within one or two of the four conversations, we limit progress toward human-centric AI. For example, the opportunities-and-risks conversation tends to be hopeful and abstract; it can appear that by naming risks, we’re making progress toward mitigating them. The future may be described in attractive terms, but it is always far off, and that makes the risks feel safer. A complementary conversation about metaphors and mechanisms can defuse the sense of magic and help the conversants see the devil in the details.

Likewise, building trust and engineering trustworthiness are key conversations we need to have for any field of human-centric AI. Then again, the scale and power possible through AI do not always bring out the best in people, and even when people act their best, unintended consequences arise. We have to maintain skepticism; we need to distrust AI, too. Unless we talk about policies and protections, we are not engaging our rights as humans to self-govern our future reality. Not everything that becomes available will be trustworthy, and we have to create governance mechanisms to keep harm at bay.

I believe it is important to notice the four kinds of conversations and use them to achieve well-rounded overall discussions about human-centric AI. I would welcome your thoughts on the kinds of conversations you observe when people talk about human-centered AI, how typical conversations limit what we talk about, and what we can do to engage a broad diversity of people in the conversations we need to have.

 
