In an article published Feb. 25 in the journal Nature, a team of artificial intelligence researchers describes the method it used to train a computer to play, and in some cases beat, human players at several classic arcade games.
The DeepMind team, acquired by Google about a year ago, describes what it calls the deep Q-network (DQN), a combination of a deep neural network and a technique known as Q-learning. The deep neural network is similar to networks that have been used to mimic animal vision, and it enables the system to "see" a video game the way a human would, as pixels on a screen.
Meanwhile, Q-learning is a mathematical formulation of a psychological concept called reinforcement learning, thought to be key to the way humans and animals learn. The researchers say DQN plays games and learns how to improve at them in a way that mimics human players. DeepMind applied DQN to 49 classic Atari 2600 games from the 1980s and found the network was able to score better than the best human players on about half of them.
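The Q-learning idea underlying DQN can be illustrated in its simplest, tabular form: an agent keeps a table of action values and nudges each value toward the reward it received plus the discounted value of the best next action. The sketch below is a minimal illustration on a toy five-state corridor, not the Atari setup or the neural-network version from the paper; the environment, state count, and learning-rate values are assumptions chosen for clarity.

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s][a] toward the
    observed reward plus the discounted best next-state value."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy corridor environment (an assumption for illustration):
# five states in a row; reaching the rightmost state pays 1.0.
N_STATES = 5
ACTIONS = ["left", "right"]

def step(s, a):
    s_next = min(s + 1, N_STATES - 1) if a == "right" else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward

random.seed(0)
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(Q[s], key=Q[s].get)
        s_next, r = step(s, a)
        q_update(Q, s, a, r, s_next)
        s = s_next
```

After training, the learned values favor "right" in every non-terminal state, which is the optimal policy for this toy corridor. DQN replaces the lookup table with a deep neural network that estimates these action values directly from the raw screen pixels.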
Founder Demis Hassabis says DeepMind researchers are now working on "knowledge transfer": teaching DQN to apply the lessons it learns from playing one game to another.
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA