The past few days have not been great for Elon Musk, after he accused one of the Thai cave rescuers of being a pedophile on Twitter.

Now the SpaceX founder and Tesla CEO is in the news for a more tech-related matter. Musk, along with several other high-profile figures in AI development, has signed a pledge not to create lethal AI weapons systems.

The pledge was organised by the Future of Life Institute and announced at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm.

The Institute has worked to raise awareness of the potential dangers posed by imbuing weapons systems, or any other device capable of taking human life, with artificial intelligence.

In the past, the Institute has enlisted Musk and other signatories for open letters urging the United Nations to consider new regulations on lethal autonomous weapons.

Notable signatories, apart from Musk, include the founders of DeepMind, Skype co-founder Jaan Tallinn and leading AI researchers such as Stuart Russell.

Whether this latest pledge achieves its goal of limiting the development of AI weapons remains to be seen, especially with the business of war proving quite profitable for the likes of the US.

You can read the full pledge below:

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatising and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.

[Image – CC0 Pixabay]