Just in case, here’s how HP Enterprise thinks we can avoid an AI apocalypse

At the weekend, a conversation about artificial intelligence (AI) devolved into friends voicing their fears of a machine uprising.

Granted, we don’t know if humanity will one day create a super intelligent AI that decides we’re too flawed to be ruling the planet.

What we do know is that AI has already found its way into our lives, whether it be GPS navigation or your music streaming service recommending a new band to listen to.

So where is AI headed, and how can we prevent the former scenario from happening?

Sorin Cheran, technology strategist at HP Enterprise, has shared his thoughts on how AI may change in the coming years.

We’re still learning about AI

Over the last 50 years humans have been developing AI in various forms, but Cheran says that we are still in a period of discovery regarding the technology.

“Despite the growing use of AI, the world is still in a period of relative discovery, researching the possibilities of deep learning and AI. Over the next five to ten years, AI investments will likely be more siloed, looking at solving problems in business, automating processes and simplifying daily life,” he says.

In future, AI projects will focus on solving specific problems rather than broader objectives.

That said, Cheran believes that in developing AI we must examine its ethics to avoid the possibility of a robot uprising.

Speaking of ethics

Cheran says that while developing AI, developers must be aware of the quality of the data that machines are fed in order to avoid bias.

This is a rather important part of developing AI, and firms working with the tech, such as Microsoft, hammer home the importance of being aware of bias when developing machines that can learn.

“We need to focus on how to make AI safe, how to avoid the misuse of autonomous machines, and how to avoid bias within AI solutions,” says Cheran.

The strategist says that humans should still play a role in verifying the data fed to a machine, as well as ensuring that the data it produces is accurate and without bias.
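To make that idea a little more concrete, here is a minimal sketch in Python of the kind of check a human reviewer might run over training data before it is fed to a machine. It is an illustration only, not anything HPE has published; the group names, labels and threshold are all hypothetical.

```python
# Hypothetical training records: (group, label) pairs.
# The groups, labels and threshold below are illustrative only.
records = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "approved"),
]

def approval_rate(records, group):
    """Share of a group's records that carry the positive label."""
    labels = [label for g, label in records if g == group]
    return sum(label == "approved" for label in labels) / len(labels)

# A crude sanity check of the kind a human reviewer might run:
# flag the dataset if the positive-label rate differs sharply between groups.
rate_a = approval_rate(records, "group_a")
rate_b = approval_rate(records, "group_b")

if abs(rate_a - rate_b) > 0.2:  # arbitrary illustrative threshold
    print(f"Possible bias: group_a {rate_a:.0%} vs group_b {rate_b:.0%} - review the data")
```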

Laws and autonomy go hand-in-hand

Earlier this year an Uber car driving autonomously killed a pedestrian crossing the road. The incident prompted many to once again ask: who is responsible when an autonomous vehicle kills a person?

Anybody who has been asked that question before will know that it isn't as easy to answer as it appears.

“Placing blame on an AI machine, such as an autonomous car, means assigning rights to the machine,” says Cheran.

“It’s clear that laws and governance standards need to be created to clarify the roles and responsibilities of both people involved in and using AI, and the AI devices themselves. Autonomy also requires a human level of judgement, ensuring a safety check is in place so that reactions aren’t triggered by accidental actions, such as an autonomous weapon firing because an alarm was accidentally set off.”

The strategist goes on to say that despite how far AI has come in recent years, we still have a long way to go before a machine that resembles human intelligence is developed.

In the meantime, however, we cannot ignore the fears people have about losing their jobs to AI. Gartner predicts that by 2020, 1.8 million jobs will be lost to AI but 2.3 million will be created, a net gain of half a million jobs.

“Technology – AI – can solve so many problems. It has the potential to solve issues such as climate change, population control, food creation, and even grave diseases such as cancer and HIV. We need to embrace it, or risk going backwards, but we need to do so with caution and preparation; an ethical approach to AI,” concludes Cheran.
