
AI for the Common Good


How can artificial intelligence (AI) contribute to the 17 Sustainable Development Goals that the U.N. has adopted to end poverty, protect the planet, and ensure prosperity for all?

That was the question at the heart of the AI for Good Global Summit organized by the ITU and the XPRIZE Foundation, which took place June 7-9 in Geneva, Switzerland.

Over those three days, some 500 speakers and attendees from the worlds of science and business, from governments and non-profit organizations, discussed how everybody in the world can benefit from AI: not just the wealthy and healthy, but also the three billion people living in poverty and the one billion living with some form of disability.

"The measure of success for AI applications is the value they create for human lives," observed ACM president Vicki L. Hanson during her keynote presentation.

Because AI is powered by data, privacy and security are central considerations in any discussion of how the technology can benefit humanity.

"As innovation is going rapidly, government agencies and international bodies are not as quick as private enterprises in establishing and implementing standards," said Hongjiang Zhang, head of ByteDance Technical Strategy Research Center in China. "Therefore, private enterprises should show their social responsibility and be leading in protecting privacy and security. In this way, they can win the trust of customers and ultimately unleash the full potential of AI technology."

Virginia Dignum, executive director of the Delft Design for Values Institute at Delft University of Technology in the Netherlands, formulated three principles on which AI development should be based: accountability, responsibility, and transparency (ART for short). She explained,

"According to the first of the ART principles, systems should be accountable. Why do they decide to take this or that step? Why have they used this type of data, and for what purpose?

"Second, people hold responsibility for systems. How are we managing and governing the data? Who has access to the data? Who doesn't have access to the data? We should create principles around the responsibility for good, sound and valuable stewardship of data and algorithms.

"The third and final of the ART principles holds that systems should be transparent. People should be able to inspect the outcomes and verify the functioning of the algorithms."

A lively panel discussion touched on a multitude of other challenges, such as building in privacy by design, the blurring line between sensitive and non-sensitive data, and the difficulty of obtaining informed consent for the use of personal data when so much passive surveillance is conducted by cameras and online trackers. Dignum stressed the need to keep "the human in the loop of AI systems."

Sean McGregor, technical manager of the XPRIZE Foundation, had the difficult task of distilling actionable recommendations from the discussion. Fortunately, one recommendation enjoyed near-universal support: to assign, identify, or convene a world governance body to lead or coordinate standards, guidelines, or regulations for AI as it relates to privacy and security.

The proposal to establish a world governance body came up in many discussions during the summit. Gary Marcus, professor of psychology and neural science at New York University, even called in his keynote lecture for oversight of AI based on an organization modeled on CERN, the European Organization for Nuclear Research, "a global collaboration with thousands of researchers from over 20 countries, working together, in common cause, building technology and science that could never be constructed in individual labs, tackling problems that industry might otherwise neglect.

"Maybe we need to have a model like that for AI: global collaboration, lots of people doing AI for the common good."

A second recommendation McGregor offered from the privacy and security discussion was to create model laws concerning data privacy and security, and to encourage governments around the world to adopt such laws.

McGregor's final recommendation from the discussions was to create public good by investing in win-win technologies: technologies that enable privacy and security without reducing the strength of AI systems, such as secure multi-party computation, homomorphic encryption, and differential privacy.
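To make the last of those techniques concrete: differential privacy adds calibrated random noise to an aggregate statistic so that no individual's record measurably changes the answer. Below is a minimal, illustrative sketch of the classic Laplace mechanism in Python; the function name, dataset, and parameters are hypothetical, not taken from any summit material.

```python
import random

def private_count(values, threshold, epsilon=0.5, sensitivity=1.0):
    """Count values above a threshold, adding Laplace noise so the
    result is (approximately) epsilon-differentially private.
    Hypothetical example, not an official implementation."""
    true_count = sum(1 for v in values if v > threshold)
    # Laplace(0, sensitivity/epsilon) noise, sampled as the difference
    # of two exponential random variables.
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Adding or removing one person's record changes the true count by at
# most 1 (the sensitivity), so the noisy answer reveals little about
# any individual while remaining useful in aggregate.
ages = [23, 35, 47, 52, 61, 19, 44]
print(private_count(ages, threshold=40))
```

A smaller epsilon means more noise and stronger privacy; choosing that trade-off is exactly the kind of governance question the summit's recommendations address.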

Overall, the summit made clear that it is too early to formulate definitive guiding principles for AI with the best chance of making the world a better place. The event generated plenty of food for thought, which smaller working groups will now follow up on; their task is to digest the discussions and turn them into the first U.N.-backed guiding principles for AI.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.

