Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences? Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent superintelligence is an existential risk for humanity.
But one can speculate endlessly. It's better to ask a more concrete, empirical question: What would alert us that superintelligence is indeed around the corner?
We might call such harbingers canaries in the coal mine of AI. When an artificial-intelligence program develops a fundamentally new capability, that is the equivalent of a canary collapsing: an early warning that AI breakthroughs are on the horizon.
Could the famous Turing test serve as a canary? The test, proposed by Alan Turing in 1950, holds that human-level AI will have been achieved when a person can no longer distinguish conversing with a computer from conversing with a human. It is an important test, but it is not a canary; passing it would signal that human-level AI has already arrived. Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones.
From Technology Review