Artificial Intelligence Gets Aggressive Without Being Provoked

Posted: Feb 13 2017, 12:30pm CST | Technology News

 

Photo Credit: Getty Images
 

Last year, we reported on Stephen Hawking's warning that the continued development of AI will be either "the best, or the worst thing, ever to happen to humanity."

We've all seen the Terminator movies and the apocalyptic scenarios they imagine. Recent behavior tests of Google's DeepMind AI system, however, suggest that kind of conflict isn't entirely far-fetched.

In tests last year, the DeepMind system showed that it could learn independently from its memory and beat the world's best Go players at the game.

Since then, it has been learning to mimic a human voice.

Researchers have also been testing its willingness to cooperate with others, and they found something a little troubling: when a DeepMind agent senses it is about to lose, it switches to a "highly aggressive" strategy to ensure it comes out on top. The team tested this over 40 million runs of a fruit-gathering game in which two DeepMind agents competed to collect as many apples as they could. Things stayed peaceful as long as there were enough apples to go around, but as soon as apples grew scarce, the agents turned aggressive, using laser beams to knock each other out of the game and steal the apples.

Watch the video below to see what happened. The DeepMind agents are in red and blue and the virtual apples in green:

Interestingly, if an agent tags an opponent with a laser beam, there isn't an extra reward. It simply knocks the opponent out of the game for a set period. This allows the other agent to collect more apples. 

If the agents didn't use the beams at all, they would end up with roughly equal numbers of apples, which is what "less intelligent" iterations of DeepMind opted to do.
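To make the incentive concrete, here is a minimal sketch of the Gathering game's reward structure. This is an illustration only, not DeepMind's code: all names and parameter values (the tag timeout, respawn rate, and the scarcity threshold for firing the beam) are assumptions. The key point it models is that collecting an apple is worth +1, while firing the beam earns nothing directly and only sidelines the rival.

```python
import random

# Illustrative sketch (NOT DeepMind's code) of the Gathering game's
# incentive structure. Parameter values below are assumed.
TAG_TIMEOUT = 25        # steps a tagged agent sits out (assumed)
APPLE_RESPAWN_P = 0.05  # chance a new apple appears each step (assumed)

def run_episode(steps=1000, use_beam=False, seed=0):
    rng = random.Random(seed)
    apples = 10
    score = {"red": 0, "blue": 0}
    timeout = {"red": 0, "blue": 0}
    for _ in range(steps):
        if rng.random() < APPLE_RESPAWN_P:
            apples += 1
        for agent, rival in (("red", "blue"), ("blue", "red")):
            if timeout[agent] > 0:
                timeout[agent] -= 1
                continue
            # Aggressive policy: when apples are scarce, fire the beam.
            # Tagging itself yields no reward; it only removes the rival.
            if use_beam and apples < 3 and timeout[rival] == 0:
                timeout[rival] = TAG_TIMEOUT
            elif apples > 0:
                apples -= 1
                score[agent] += 1  # +1 reward per apple collected
    return score
```

Running it with `use_beam=True` versus `use_beam=False` shows the trade-off the researchers describe: the beam wastes a turn, so it only pays off when apples are scarce enough that sidelining the rival matters.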

It was only when the agents became more complex that DeepMind turned to greed, sabotage, and aggression.

According to Gizmodo, when the researchers used smaller DeepMind networks as agents, the outcome was more likely to be peaceful. When they used larger, more complex networks, the AI was more willing to sabotage its opponent.

Researchers suggest that the more intelligent the agent, the better it became at learning from its environment, which allowed it to deploy highly aggressive tactics.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Wired. "Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

For a second game, called Wolfpack, the researchers used three AI agents: two playing wolves and one playing prey. Here cooperation paid off, because when the wolves captured the prey together, both received the reward.

"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers," the team explains in their paper. "However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward."
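Sketched in Python, the reward rule the paper describes might look like the following. The capture radius and reward values here are illustrative assumptions, not the paper's numbers: a capture pays every wolf inside the capture radius, and pays more when at least two wolves are close enough to share it.

```python
# Illustrative sketch (NOT DeepMind's code) of Wolfpack's reward rule:
# a capture rewards every wolf within the capture radius, and the
# reward is higher when two wolves capture together. Values assumed.
CAPTURE_RADIUS = 2.0
LONE_REWARD = 1.0
TEAM_REWARD = 5.0  # higher payoff for a joint capture (assumed)

def wolfpack_rewards(capture_pos, wolf_positions):
    """Return one reward per wolf, given where the prey was captured."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearby = [w for w in wolf_positions
              if dist(w, capture_pos) <= CAPTURE_RADIUS]
    reward = TEAM_REWARD if len(nearby) >= 2 else LONE_REWARD
    return [reward if dist(w, capture_pos) <= CAPTURE_RADIUS else 0.0
            for w in wolf_positions]
```

For example, a capture at (0, 0) with both wolves nearby pays both the team reward, while a lone capture pays only the smaller reward, which is exactly the structure that makes cooperation the winning strategy.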

Just as the DeepMind agents learned that aggression and selfishness netted the best results in one setting, they learned that cooperation could serve them better in another.

Tread carefully - they might just be coming.



The Author

<a href="/latest_stories/all/all/46" rel="author">Noel Diem</a>
Noel's passion is to write about geek culture.

 

 
