Opinion

Artificial Intelligence: Past and Future

By Moshe Y. Vardi, Communications Editor-in-Chief

Chess fans remember many dramatic chess matches in the 20th century. I recall being transfixed by the interminable 1972 match between challenger Bobby Fischer and defending champion Boris Spassky for the World Chess Championship. The most dramatic chess match of the 20th century was, in my opinion, the May 1997 rematch between the IBM supercomputer Deep Blue and world champion Garry Kasparov, which Deep Blue won 3½–2½.

I was invited by IBM to attend the rematch. I flew to New York City to watch the first game, which Kasparov won. I was swayed by Kasparov’s confidence and decided to go back to Houston, missing the dramatic second game, in which Kasparov lost—both the game and his confidence.

While this victory of machine over man was considered by many to be a triumph for artificial intelligence (AI), John McCarthy (Sept. 4, 1927–Oct. 24, 2011), who not only was one of the founding pioneers of AI but also coined the very name of the field, was rather dismissive of the accomplishment. "The fixation of most computer chess work on success in tournament play has come at scientific cost," he argued. McCarthy was disappointed that the key to Deep Blue’s success was its sheer compute power rather than the deep understanding of the game itself exhibited by expert chess players.

AI’s next major milestone occurred last February with IBM’s Watson program winning a "Jeopardy!" match against Brad Rutter, the biggest all-time money winner, and Ken Jennings, the record holder for the longest championship streak. This achievement was also dismissed by some. "Watson doesn’t know it won on 'Jeopardy!'," argued the philosopher John Searle, asserting that "IBM invented an ingenious program, not a computer that can think."

In fact, AI has been controversial from its early days. Many of its early pioneers overpromised. "Machines will be capable, within 20 years, of doing any work a man can do," wrote Herbert Simon in 1965. At the same time, AI’s accomplishments tended to be underappreciated. "As soon as it works, no one calls it AI anymore," complained McCarthy. Yet it is recent worries about AI that indicate, I believe, how far AI has come.

In April 2000, Bill Joy, the technologists’ technologist, wrote a "heretical" article entitled "Why the Future Doesn’t Need Us" for Wired magazine: "Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species," he wrote. Joy’s article was mostly ignored, but in August 2011 Jaron Lanier, another widely respected technologist, wrote about the impact of AI on the job market. In the not-too-distant future, he predicted, it would simply be inconceivable to put a person behind the wheel of a truck or a cab. "What do all those people do?" he asked.

Slate magazine ran a series of articles in September 2011 titled "Will Robots Steal Your Job?" According to writer Farhad Manjoo, who detailed the many jobs we can expect to see taken over by computers and robots in the coming years, "You’re highly educated. You make a lot of money. You should still be afraid."

In fact, worries about the impact of technology on the job market concern not only the distant future but also the near term. In a recent book, Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Erik Brynjolfsson and Andrew McAfee argue that "technological progress is accelerating innovation even as it leaves many types of workers behind." Indeed, over the past 30 years, as we saw the personal computer morph into tablets, smartphones, and cloud computing, we also saw income inequality grow worldwide. While the loss of millions of jobs over the past few years has been attributed to the Great Recession, whose end is not yet in sight, it now seems that technology-driven productivity growth is at least a major factor.

The fundamental question, I believe, is whether Herbert Simon was right, even if his timing was off, when he said "Machines will be capable … of doing any work a man can do." While AI has proven to be much more difficult than its early pioneers believed, its inexorable progress over the past 50 years suggests that Simon may have been right. Bill Joy’s question, therefore, deserves not to be ignored. Does the future need us?

Moshe Y. Vardi, EDITOR-IN-CHIEF
