How computers beat humans at their own games

Computers have long been part of our daily lives, handling everything from productivity to entertainment. But they have also become increasingly adept at playing games.

Ever since IBM’s Deep Blue computer beat world chess champion Garry Kasparov in 1997, computer intelligence has taken some sizeable steps forward.

With some stunning advances in artificial intelligence, we are only just starting to get a glimpse of how computers will soon be able to beat us humans at any game we choose.

But is there any kind of game that a computer cannot play? Obviously, it would be fairly hard for a computer to master a lottery, as such games are based purely on random numbers. But for the majority of rule-based games, it seems that computers are one step ahead.

A defining moment came in 2015, when Google DeepMind’s AlphaGo artificial intelligence managed to beat a leading professional player at the board game Go. What was remarkable about these performances was the way in which the computer used surprising levels of creativity to outwit its opponents.

Since then, we’ve seen a number of artificial intelligence programs designed to grapple with some of the world’s most popular video games. These games have proved useful for AI development because they demand rule-based learning and problem-solving abilities.

While most of us will enjoy a few rounds of Nintendo’s Super Smash Bros. Melee as mindless entertainment, an MIT research team used the game to teach their AI some of the idiosyncrasies of physical human behaviour while it comfortably beat professional Super Smash Bros. players.

Even multiplayer first-person shooters like Quake III Arena have been conquered by artificial intelligence. In 2018, DeepMind’s AI displayed the kind of teamwork and trickery needed to succeed in the capture-the-flag mode of this hit shooter. This was significant because it showed that computers could learn human-like behaviours such as following team-mates and prioritising certain tasks.

But perhaps the biggest breakthrough for video gaming computers came when DeepMind’s AlphaStar managed to succeed at the fiendishly tricky StarCraft II. The game is well known as one of the most difficult esports, but AlphaStar proved itself a formidable competitor that was able to learn from the behaviours of real-life players.

However, there are still some limits that computers will have to overcome in certain video games. A recent Dota 2 tournament featured five computers that attempted to work together as a team, but they fell short in certain areas, such as picking a balanced squad from the game’s more than a hundred available heroes.

Despite this, it’s clear that the advances computers have made in video games could have significant effects on the development of artificial intelligence. Not only will machine learning help to produce higher-quality AI opponents in video games, but a number of behaviours learned from gaming could be applied outside the video game industry.

From the teamwork skills necessary to win a team-based game like Dota 2 to the simple mechanics of driving a car in a racing simulator like Gran Turismo, it seems that video games give artificial intelligence numerous ways to practise the many tasks involved in behaving like a human.

So while computers may not be able to help us predict the winning lottery numbers just yet, for any game that involves rules, skill and planning, it seems that computers are already on the winning side.