
A More Human Approach to Artificial Intelligence

By Nature

July 31, 2019


Where does the mind stop and the rest of the world begin? When Andy Clark, a philosopher at the University of Edinburgh, U.K., asked this question in the 1990s, it was a world without deep learning or smartphones. As technology has developed, his argument that the boundary between cognition and the environment is porous has only strengthened. He spoke to Nature about the state of intelligence research and why a truly intelligent machine needs not only a mind, but also a body.

What has been the most important advance in cognitive science during your career?

There have been two main advances since I joined the philosophy and cognitive-science community in 1984. The first is the development of artificial neural networks, which are computer systems inspired by the way that neurons interconnect in the brain. Then, in the past decade, a particular theory of how the brain works has emerged that is consistent with that research. We now have an idea of the brain as a probabilistic-prediction device. That, for my money, is the most exciting advance.
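The "probabilistic-prediction device" idea can be illustrated with a toy update rule: the system keeps an estimate, predicts its next observation, and corrects the estimate in proportion to the prediction error. This is a minimal sketch of that general principle; the function name, learning rate, and numbers are illustrative assumptions, not a model from the interview.

```python
import numpy as np

def predictive_update(estimate: float, observation: float, lr: float = 0.1) -> float:
    """Nudge the current estimate toward the observation by a fraction
    of the prediction error (observation minus prediction)."""
    error = observation - estimate  # prediction error
    return estimate + lr * error    # correct the estimate

# Repeated noisy observations of a true value of 5.0: the estimate,
# driven only by prediction errors, settles near the true value.
rng = np.random.default_rng(0)
estimate = 0.0
for _ in range(200):
    estimate = predictive_update(estimate, 5.0 + rng.normal(0.0, 0.5))

print(round(estimate, 1))  # close to 5.0
```

The point of the sketch is that nothing tells the system the true value directly; it only ever sees the mismatch between what it predicted and what arrived.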

Why does artificial intelligence (AI) need more data than brains do to perform the same task?

Our brains have been tailored by millions of years of evolution to help us deal with the kinds of object and structure that we're likely to encounter in the world. AI systems, however, start pretty much from scratch. There's also an architectural difference. A lot of deep-learning systems, which use many layers of artificial neurons to progressively extract features from raw data, do not work in a top-down, prediction-driven way, unlike the brain. They work in a more feed-forward way.

What is the difference between predictive and feed-forward approaches?

A feed-forward approach starts with some input, and works its way forwards, network layer by network layer, to deliver a result. It can be trained using plenty of feedback signals, but once trained, it can only map inputs to ever-deeper representations. This means that it can't benefit from the iterative, context-sensitive checking that is so characteristic of the brain, and is the hallmark of biological intelligence.
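The feed-forward mapping described above can be sketched in a few lines: the input is pushed through successive layers to an output, with no downward connections carrying predictions back. The layer sizes and random weights here are illustrative assumptions, not any particular trained network.

```python
import numpy as np

def relu(x):
    # Standard rectified-linear activation used between layers.
    return np.maximum(0.0, x)

def feed_forward(x, weights):
    """Propagate an input layer by layer; each layer sees only the
    activations below it, never a top-down prediction."""
    activation = x
    for w in weights:
        activation = relu(w @ activation)
    return activation

# Two illustrative weight matrices: 3 inputs -> 4 hidden -> 2 outputs.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
out = feed_forward(np.array([1.0, 0.5, -0.2]), layers)
print(out.shape)  # (2,)
```

Training would adjust the weights from feedback signals, but at run time the information still flows one way; there is no iterative, context-sensitive checking of the kind the brain's predictive loop provides.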

