It is a familiar science-fiction trope: humanity threatened by out-of-control machines that have misinterpreted human desires. A substantial segment of the artificial intelligence (AI) research community is deeply concerned that this kind of scenario could play out in real life.
But what about the more immediate risks posed by non-superintelligent AI, such as job loss, bias, privacy violations, and the spread of misinformation? It turns out there is little overlap between the community concerned primarily with such near-term harms and the one focused on longer-term alignment risks.
From Quanta Magazine