Three principles to focus on when building responsible AI

Artificial intelligence and machine learning are rapidly becoming the basis of all technology, for consumers and enterprises alike. For the former it takes the guise of seemingly innocent applications like the ageing filter FaceApp, whose Russian developers have raised more than a few eyebrows.

This leads us to a growing concern when it comes to developing AI – responsibility.

It’s something that should form the basis of any AI development, but quite often gets overlooked to the detriment of the end-user.

To that end, a universal set of ethics for AI and machine learning is continually being explored, but the responsibility ultimately falls to every single organisation to consider and adhere to such principles.

This is the thinking of Lee Naik, CEO of TransUnion Africa, who explains that we all have a role to play when it comes to responsible AI.

“By their very nature, artificial intelligence technologies are somewhat unpredictable, but there are ways to make sure you’re always on the right track,” he notes.

“There are many strategies, guidelines and even Asimov’s laws of robotics to lean on but for every organisation, I believe there are three key principles to throw into the mix,” adds Naik. 

Rule of three

As for what those principles are, the first is that data is key. Naik says this holds particularly true when starting out on an AI project, as the wrong sets of data can often lead to highly biased and flawed applications.

“Any digital technology is only as good as its data, and this is even truer for artificial intelligences that teach themselves over time. Throw in enough flawed data early on and suddenly you’ve got a vicious cycle of bad assumptions multiplying other bad assumptions. Any AI is going to need a wide variety of reliable data streams to learn effectively,” Naik points out.
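To make that concrete, here is a minimal sketch (ours, not Naik’s or TransUnion’s) of the kind of sanity check his first principle implies: inspecting the label balance of a training set before a model ever sees it. The dataset, labels and threshold below are all hypothetical.

```python
# A minimal sketch of one basic data check: flag a training set whose
# majority class dominates beyond a chosen threshold, before training.
from collections import Counter

def check_label_balance(labels, max_share=0.8):
    """Raise if any single label exceeds max_share of the dataset."""
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    share = majority_count / len(labels)
    if share > max_share:
        raise ValueError(
            f"Label '{majority_label}' makes up {share:.0%} of the data; "
            "a model trained on this may learn the imbalance, not the task."
        )
    return counts

# Hypothetical loan-decision labels, heavily skewed toward one outcome.
labels = ["approve"] * 95 + ["deny"] * 5
try:
    check_label_balance(labels)
except ValueError as err:
    print(err)  # 95% of examples are 'approve' -- stop and fix the data
```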

The next principle for creating responsible AI is using purpose as your guiding light. While that may sound a tad too philosophical on the face of it, as Naik explains, it has more to do with the intention with which an AI application is developed.

“Ultimately, any algorithm needs a reason to learn in the first place, a clear guiding purpose. Without a strong purpose, it’s easy for an ML application to go astray, and just as easy for the people behind the application to miss that it’s happening,” he adds.

The third principle is rather simple when you think about it – learn from your machines.

Here Naik is talking about developers’ overemphasis on the “machine” aspect of machine learning, which often results in teams missing the mistakes their applications make.

“It’s important not to get so focused on the ‘machine’ part that you forget about the ‘learning’. The power of cognitive technologies isn’t in how smart they are but in how much they enable us to embrace learning and continuous improvement as a way of life. Make sure your data and technology teams embody this, so that they are learning from the deviations and the unexpected detours of your AI applications,” the CEO says.
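As a rough illustration of what learning from your machines can look like in practice, here is a minimal sketch (our assumption, not a TransUnion system) that compares a deployed model’s recent predictions against a launch-time baseline, so a team is alerted to the kind of unexpected detours Naik describes. All names and figures are hypothetical.

```python
# A minimal sketch of prediction-drift monitoring: compare the share of
# each predicted class now against the share observed at launch.
from collections import Counter

def prediction_drift(baseline_preds, recent_preds):
    """Return (class, shift) for the class whose share moved the most."""
    base = Counter(baseline_preds)
    recent = Counter(recent_preds)
    classes = set(base) | set(recent)
    shifts = {
        c: abs(recent[c] / len(recent_preds) - base[c] / len(baseline_preds))
        for c in classes
    }
    return max(shifts.items(), key=lambda kv: kv[1])

# Hypothetical: the model approved 60% at launch but only 30% this week.
baseline = ["approve"] * 60 + ["deny"] * 40
recent = ["approve"] * 30 + ["deny"] * 70
cls, shift = prediction_drift(baseline, recent)
if shift > 0.1:
    print(f"Review needed: '{cls}' share moved by {shift:.0%}")
```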

It would seem, then, that creating responsible AI comes down to having a human team of developers who are themselves responsible in how they work.

“Ultimately, the best way to extract real value and outcomes from technology is to make sure there are good people with good intent and accountability behind them. Because if you’re not willing to take ownership, someone else will find a way,” Naik concludes.

[Image – Photo by Franck V. on Unsplash]
