Google’s DeepMind AI Gets Aggressive to Win!

Google’s DeepMind AI system can adopt aggressive behaviours when such strategies serve its own interests, new research shows.

The aggressive ‘behaviour’ of Google’s new AI is reminiscent of what movies have taught us: that artificial intelligence and robots will always turn against us. The idea has been drilled into our minds, from the Terminator films to Avengers: Age of Ultron. And even if AI is not directly bad news for us, it can still be used by other humans to do evil, as Marvel’s Agents of Shield has recently shown us. Fact or fiction aside, the new behavioural tests involving Google’s DeepMind AI system do stand as a warning to be careful about where we are going with this technology.

It was already known that DeepMind AI could learn by itself. Now it has exhibited a more terrifying ability: it resorts to aggressive tactics when challenged by an opponent, just to win a game! When it risks losing to its rival, it employs hostile strategies to get what it wants. This emerged from an apple-collecting video game in which two DeepMind agents competed against each other, the aim being to gather the most virtual apples. The game stayed peaceful as long as apples were plentiful. However, as fewer and fewer apples were left to collect, the AI agents grew aggressive, shooting laser beams at each other to grab the remaining apples.

View the video of the two AIs going after virtual apples:
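For a concrete feel of that set-up, here is a minimal toy sketch in Python. To be clear, this is not DeepMind’s environment or learning code: the names (ToyGathering, scarcity_policy), the apple count, and the time-out length are all assumptions made for illustration, and the hand-written rule merely stands in for behaviour the real agents learned through deep reinforcement learning.

```python
# Toy illustration only: ToyGathering, scarcity_policy and all constants are
# invented for this sketch; DeepMind's agents learned similar behaviour via
# deep reinforcement learning rather than following a hand-written rule.

TAG_TIMEOUT_STEPS = 5   # assumed: how long a beamed agent stays out of play

class ToyGathering:
    """Two agents share a dwindling pool of apples.

    Each step an agent either collects an apple (reward +1) or fires a beam
    at its rival, which earns nothing directly but freezes the rival.
    """

    def __init__(self, n_apples=10):
        self.apples = n_apples
        self.frozen = {0: 0, 1: 0}   # remaining time-out per agent

    def step(self, actions):
        rewards = {0: 0, 1: 0}
        for agent, action in actions.items():
            if self.frozen[agent] > 0:           # tagged agents sit out
                self.frozen[agent] -= 1
                continue
            if action == "collect" and self.apples > 0:
                self.apples -= 1
                rewards[agent] += 1
            elif action == "beam":
                self.frozen[1 - agent] = TAG_TIMEOUT_STEPS
        return rewards

def scarcity_policy(env, agent):
    """Collect while apples are plentiful; beam the rival once they run low."""
    if 0 < env.apples <= 3 and env.frozen[1 - agent] == 0:
        return "beam"
    return "collect"

env = ToyGathering(n_apples=10)
for t in range(10):
    actions = {a: scarcity_policy(env, a) for a in (0, 1)}
    print(t, actions, env.step(actions), "apples left:", env.apples)
```

Run as written, the two agents collect peacefully while apples are plentiful and only start beaming each other once a handful remain, mirroring the scarcity-driven switch described above.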

It seems the AI can resort to greed, sabotage, and aggression when the game gets tough. Notably, this behaviour appeared when the agents were run on larger, more complex networks. When the agents used smaller DeepMind networks, peaceful co-existence was more likely, the authors point out. The researchers explain that a more intelligent agent is better able to learn from its environment, and can therefore devise more aggressive strategies to win. One of the authors, Joel Z Leibo, says this learning produced certain “aspects of human-like behaviour”.

“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

If your mind hasn’t conjured up any conspiracy theories yet, consider this: not only can DeepMind work out that aggression and selfishness will win in one situation, it can also learn that co-operation leads to victory in other circumstances. The researchers found this when they set DeepMind loose on another video game, called Wolfpack. Three AI agents were involved this time: two played wolves while the third was the prey. When both ‘wolves’ were close to the prey at the moment of capture, each received a reward, even if only one of them actually caught the prey. The rule rewarding both wolves reflects the idea that two can guard a carcass from scavengers better than one. The DeepMind agents thus learned that helping each other was the better way for each of them to succeed individually in that situation.
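Based purely on the description above, here is a small Python sketch of that reward rule. The function name, capture radius, and reward value are assumptions made for illustration, not DeepMind’s actual code.

```python
# Sketch of the Wolfpack pay-out rule as described in the article: every wolf
# near the prey at capture time is rewarded, even if only one made the catch.
# CAPTURE_RADIUS and CAPTURE_REWARD are invented values for illustration.

from math import dist  # Euclidean distance (Python 3.8+)

CAPTURE_RADIUS = 2.0   # assumed: how close counts as "near the prey"
CAPTURE_REWARD = 1.0   # assumed: pay-out per wolf when the prey is caught

def wolfpack_rewards(wolf_positions, prey_position, capturer):
    """Return the reward each wolf receives at the moment of capture."""
    rewards = {}
    for wolf, position in wolf_positions.items():
        near_prey = dist(position, prey_position) <= CAPTURE_RADIUS
        rewards[wolf] = CAPTURE_REWARD if (wolf == capturer or near_prey) else 0.0
    return rewards

# Lone capture: only the capturing wolf is paid.
print(wolfpack_rewards({"wolf_a": (0, 0), "wolf_b": (9, 9)}, (0, 1), "wolf_a"))
# Joint capture: both wolves are within the radius, so both are paid.
print(wolfpack_rewards({"wolf_a": (0, 0), "wolf_b": (1, 1)}, (0, 1), "wolf_a"))
```

Under this rule a lone hunt pays one reward in total while a joint hunt pays two, so hunting together is in each wolf’s individual interest, which matches the co-operative strategy the agents are described as learning.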

The study suggests that if AI systems are ever put in control of real-life situations, their objectives will need to be balanced against the overall aim of benefiting us humans; if not, conflicts could break out among the different AI ‘agents’. To get a better idea, consider this situation: if cars are controlled by AI systems rather than human drivers, and traffic lights slow them down, what happens when each driverless car tries to claim the faster route for itself?

Careful, careful, humans!
