Elon Musk, Stephen Hawking and Steve Wozniak have put their weight behind an initiative calling on governments to ban autonomous weapons.
Elon Musk shared a link on Twitter to an open letter asking for a ban on military weapons powered by artificial intelligence.
The open letter, supported by Stephen Hawking and Steve Wozniak among others, describes autonomous weapons as the third revolution in warfare, after gunpowder and nuclear arms.
Artificial intelligence (AI) technology has reached a point where the deployment of such weapons is "feasible within years, not decades," says the Future of Life Institute in the letter.
The letter makes the point that the time to stop governments from investing in AI weapons is now, before it is too late.
"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."
Research into AI weapons has already begun, though. Below is a video demonstration of an autonomous swarm developed by US Navy researchers. The video makes clear how real AI weapons already are.
If Elon Musk takes time out of his busy schedule to put his support behind a cause like this, it is serious. He is surrounded by science all the time. If he is concerned, "normal geeks" should be concerned too. The open letter can be signed here.
The open letter was officially announced at the opening of the IJCAI 2015 conference on July 28.
The Future of Life Institute is a volunteer-run research and outreach organization working to mitigate existential risks facing humanity. It is currently focusing on potential risks from the development of human-level artificial intelligence.