Luminaries including Stephen Hawking and Elon Musk have recently warned about the potential threat artificial intelligence (AI) poses to the human race, in terms that strike some as fantastical. New York University professor Gary Marcus, CEO of Geometric Intelligence, says Hawking and Musk have a point, but the existential threat they fear is still many decades off, and people face somewhat different threats from AI in the nearer term. Marcus says "superintelligent" machines are unlikely to arrive soon, but we are already placing a great deal of power and control in the hands of automated systems and need to be certain those systems can handle it.
Stock markets and autonomous driving technology are two examples of automated systems that could do tremendous damage if not properly and rigorously controlled, Marcus says.
Although he acknowledges such technologies have enormous potential to do good, he says steps must be taken to ensure they do not go haywire, including funding advances in program verification and enacting laws governing the use of automated systems in specific, risky applications.
From The Wall Street Journal
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA