Computer program beats humans at “Space Invaders”


In this file photo from Oct. 5, 2004, a youngster tries out a Ms. Pac-Man game in New York City. Computers have already beaten human champions in Jeopardy! and chess, but now artificial intelligence has conquered a whole new realm: Space Invaders. Google scientists have concocted software that can outperform humans on dozens of 1980s Atari video games, like video pinball, boxing, and Breakout. But they don’t seem to stand a chance against Ms. Pac-Man. (AP Photo / Richard Drew, file)

(AP) — Computers have already beaten human champions in “Jeopardy!” and chess, but artificial intelligence has now conquered a whole new realm: “Space Invaders.”

Google scientists have concocted software that can outperform humans in dozens of 1980s Atari video games, like video pinball, boxing, and “Breakout.” But computers don’t seem to stand a chance against “Ms. Pac-Man.”

The goal is not to turn video games into a spectator sport, turning couch potatoes who play games into couch potatoes who watch computers play. The real accomplishment is computers that can learn on their own to do tasks, starting from scratch and improving through trial and error, just as humans do.

The computer program, called Deep Q-network, was given few instructions to start, but over time it did better than humans in 29 of the 49 games, and in some cases, such as video pinball, it did 26 times better, according to a new study published Wednesday in the journal Nature. It is the first time an artificial intelligence program has linked different types of learning systems, said study author Demis Hassabis of Google DeepMind in London.
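At its core, the “trial and error” the researchers describe is reinforcement learning: the program tries an action, observes the score it earns, and adjusts its estimate of how good that action was. As a loose illustration only (the action names, lookup table, and parameters below are hypothetical stand-ins, not DeepMind’s code, which uses a deep neural network instead of a table), a bare-bones Q-learning update looks something like this in Python:

    # Toy sketch of the Q-learning update behind "learning by trial and error."
    # Everything here is illustrative; the real Deep Q-network replaces the
    # lookup table with a deep neural network that reads raw screen pixels.
    import random
    from collections import defaultdict

    ACTIONS = ["left", "right", "fire"]        # hypothetical joystick moves
    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.05    # learning rate, discount, exploration rate

    q_value = defaultdict(float)               # (state, action) -> estimated worth

    def choose_action(state):
        # Mostly pick the action that currently looks best; sometimes explore at random.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_value[(state, a)])

    def learn(state, action, reward, next_state):
        # Nudge the estimate toward the reward just earned plus the best
        # value expected from the situation that followed.
        best_next = max(q_value[(next_state, a)] for a in ACTIONS)
        q_value[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_value[(state, action)])

Repeated over millions of game frames, updates like this are how the program gradually turns raw scores into a playing strategy.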

Deep Q “can learn and adapt to unexpected things,” Hassabis said at a press conference. “These types of systems are more human in the way they learn.”

In the underwater game “Seaquest”, Deep Q devised a strategy that scientists had never considered.

“It’s really fun to watch computers discover things that you haven’t figured out yourself,” said study co-author Volodymyr Mnih, also from Google.

Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, who was not part of the research, said in an email: “It’s very impressive. Most people don’t realize how far (artificial intelligence) has come. And this is just the beginning.”

Nothing in Deep Q is customized for Atari or for a specific game. The idea is to create a “general learning system” that can pick up tasks through trial and error and eventually do things that even humans find difficult, Hassabis said. This program, he said, “is the first rung of the ladder.”

Emma Brunskill, a computer science professor at Carnegie Mellon University who was also not part of the study, said that learning despite the lack of customization “brings us closer to having versatile systems equipped to learn a wide range of tasks well, instead of just chess or just ‘Jeopardy!’”

Going from pixels on a screen to making decisions about what to do next, without even a hint of preprogrammed advice, “is really exciting,” Brunskill said. “It’s what we do as people.”
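In practice, “going from pixels to decisions” means a convolutional neural network that looks at the last few screen images and outputs one score per joystick action, then picks the highest-scoring one. The sketch below is a rough, hypothetical illustration of that idea (the layer sizes and names are assumptions loosely modeled on the published architecture, not DeepMind’s actual code):

    # Hypothetical sketch: a small convolutional network mapping a stack of
    # recent game frames to one estimated value per joystick action.
    import torch
    import torch.nn as nn

    NUM_ACTIONS = 4                                 # assumed number of moves for one game

    pixels_to_action_values = nn.Sequential(
        nn.Conv2d(4, 16, kernel_size=8, stride=4),  # 4 stacked 84x84 grayscale frames
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=4, stride=2),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 9 * 9, 256),
        nn.ReLU(),
        nn.Linear(256, NUM_ACTIONS),                # one score per possible action
    )

    frames = torch.zeros(1, 4, 84, 84)                        # a single stack of screen frames
    action = pixels_to_action_values(frames).argmax(dim=1)    # choose the highest-scoring action

No game-specific rules appear anywhere in such a network; everything it knows about a game comes from the pixels and the score.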

The idea is that, scaled up, the system could someday handle something like asking a phone to plan a full trip to Europe, booking all the flights and hotels on its own, “and everything just works, like having a personal assistant,” Hassabis said.

But when it comes to some ways of thinking, Deep Q isn’t even as smart as a toddler: it can’t transfer what it has learned from one situation to another, and it doesn’t grasp abstract concepts, Hassabis said.

Deep Q has had trouble with “Ms. Pac-Man” and “Montezuma’s Revenge” because those games involve more planning ahead, he said.

Next, the scientists will test the system on more complicated games from the 1990s and beyond, perhaps a complex game like “Civilization,” in which players build an entire empire to stand the test of time.

Deep Q doesn’t show what Hassabis would call creativity. Of its surprising strategies, he said, “I would call that discovering something that already existed in the world.”

Creativity would be if the program created its own computer game, Hassabis said. Artificial intelligence is not there, he said.

At least not yet.




More information:
Nature 518, 529-533 (February 26, 2015). DOI: 10.1038/nature14236

© 2015 The Associated Press. All rights reserved.

Citation: HAL wins: Computer program beats humans at ‘Space Invaders’ (2015, February 25) retrieved November 4, 2021 from https://phys.org/news/2015-02-hal-bests-humans-space-invaders.html

This document is subject to copyright. Other than fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.

