By Henrik Skaug Sætra, Mark Coeckelbergh, John Danaher
Communications of the ACM, Vol. 66, No. 1, Pages 39-41
Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoid reliance on Big Tech.
The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?
This is a good article, and I commend the authors for digging into the topic. I would like to suggest another approach to the dilemma of ethics in the pursuit of a career in the many facets of the tech industry: professional licensing.
Many have argued that we license too many professions. But in fields where issues of competency or ethical conduct have the potential to cause significant harm, whether to the public at large or to specific customers, two steps have generally followed. First, a professional society or a standards or regulatory body generates a code of professional standards, including an ethical standard. Then the professional society, working with government or with government acting on its own, establishes a licensing regime. The ACM has accomplished the first step. It may be time to take the second, with the ACM leading the way rather than waiting for government to take the lead.
One cannot help but be reminded of the twin tragedies in the construction of the Quebec Bridge, which led to the licensing of professional engineers in Canada. The first collapse, on August 29, 1907, killed 75 workers. Yet it was not until the second collapse, on September 11, 1916, which killed another 13 people, that the recommendations coming out of the first tragedy gained sufficient traction for action to be taken. For many years afterward, as part of the recognition ceremony for newly minted engineers, graduates received an iron ring said, by tradition, to have been crafted from steel recovered from the collapsed bridge.
We can already point to a number of significant public-harm disasters directly attributable to competency issues in either the design of automated systems (e.g., the Boeing 737 MAX crashes) or their operation (e.g., the many documented cases of harm to children and to social stability caused by social media ad targeting). Thus, we are faced with an interesting question: How much more blood and damage must accumulate on the hands of our "professionals" before we take the next step and move to a licensing regime with an ethics standard that has teeth in it?