Google DeepMind's artificial intelligence (AI) system has been upgraded so it defeats human players at 1980s arcade games even more decisively, thanks to refinements to its reinforcement-learning software. The improved system can beat people at 31 games, up from 23 in an earlier iteration.
The updates have brought DeepMind close to the performance level of a human expert in games such as Up and Down, Zaxxon, and Asterix.
The system is not coached on how to win the 49 Atari games it is exposed to; instead, it plays each game repeatedly over seven days, gradually improving its performance.
The AI employs a deep neural network: each layer of linked computer nodes passes data up through the network to top-level neurons, which make the final call on which action DeepMind should take. The system's Deep Q-network is fed raw pixels from each game, from which it works out factors such as the distance between objects on screen. By studying the score it achieves in each game, the system builds a model of which action will lead to the best result.
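The learn-from-score idea can be sketched with the Q-learning update rule that underlies the Deep Q-network. This is an illustrative toy, not DeepMind's code: the real system replaces the lookup table below with a deep neural network over raw pixels, and the state, action, and parameter names here are assumptions for the sketch.

```python
# Toy sketch of the Q-learning value update behind the Deep Q-network.
# NOT DeepMind's implementation: DQN swaps this table for a deep net.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate (illustrative value)
GAMMA = 0.99   # discount applied to future score

q = defaultdict(float)  # (state, action) -> estimated value of acting

def update(state, action, reward, next_state, actions):
    # Best score believed achievable from the next state
    best_next = max(q[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    # Nudge the current estimate toward the observed target
    q[(state, action)] += ALPHA * (target - q[(state, action)])

def choose(state, actions, epsilon=0.1):
    # Mostly pick the highest-valued action; occasionally explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])
```

Playing a game for days amounts to looping `choose` and `update`: the score feeds in as `reward`, and the value estimates gradually come to favor actions that lead to higher scores.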
The updated DeepMind makes fewer mistakes when playing the games because it reduces the odds of overestimating the payoff of a specific action.
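The overestimation problem arises because a single set of value estimates is used both to pick the best next action and to score it, so any noisy overestimate gets chosen and trusted at once. The standard remedy, known as double Q-learning, decouples the two roles; the sketch below contrasts the two targets and is an assumption-laden illustration, not the article's described system.

```python
# Hedged sketch contrasting the standard Q-learning target with the
# double Q-learning target that curbs overestimation. Names and values
# are illustrative assumptions, not DeepMind's code.
GAMMA = 0.99  # discount applied to future score

def standard_target(reward, q, next_state, actions):
    # One estimate both selects and evaluates: noise biases this upward
    return reward + GAMMA * max(q[(next_state, a)] for a in actions)

def double_target(reward, q_select, q_eval, next_state, actions):
    # One set of estimates picks the action, a second one scores it,
    # so a lone overestimate is unlikely to survive both steps
    best = max(actions, key=lambda a: q_select[(next_state, a)])
    return reward + GAMMA * q_eval[(next_state, best)]
```

When the two estimate sets disagree about which action looks best, the double target falls back on the independent evaluation, which damps the optimistic bias.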
From Tech Republic
View Full Article
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA