AI leaders sign an open letter to prevent the robot apocalypse

Artificial intelligence (AI) is still in its infancy, but a host of leading experts in the field have pledged to monitor its worldwide development and try to ensure that a science-fiction-style robot apocalypse doesn’t occur.

AI is one of those things that could be the greatest thing to happen to humanity, or the beginning of the end for human civilisation. In fact, over the last few months we’ve had warnings from some of the most high-profile, smartest people on the planet about the dangers of getting AI wrong. Both Stephen Hawking, widely regarded as one of the smartest humans alive, and Elon Musk, the founder of SpaceX and Tesla and an early investor in the AI research company DeepMind (now owned by Google), have publicly stated that AI could end human life on Earth.

While the open letter from the Future of Life Institute doesn’t come right out and say that we should be wary of a robot apocalypse, one line in particular gives away the fact that it has crossed their minds. “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the letter states.

Those who have signed the pledge already include the co-founders of DeepMind, several MIT professors, and experts from IBM’s Watson supercomputer team and Microsoft Research.

Whether or not it will be enough to prevent the total destruction of the human race is uncertain, but it can’t hurt for us all to pinky swear that we won’t make bad robots on purpose.
