By now the message that diversity is key when creating technology should be incredibly clear. For those who haven't received that message yet, consider Microsoft's Tay chatbot or the AI recruiter that was biased toward white men.
At Devconf 2019 in Johannesburg this morning, Clifford de Wit, chief technology officer at Dexterity Digital, presented his keynote, titled Age of the Modern Developer: A Revolution in our Midst.
The talk took a turn when de Wit pointed out that software and hardware have started to become intrinsically linked in our modern era. With this in mind, the CTO posits that developers of this technology need to seriously consider the consequences their solutions have on the world at large.
Autonomous vehicles, for example, represent a coming together of hardware and software. As you might be aware, an enormous amount of time and money has been spent on trying to make AVs as safe as possible. Yet even with millions sunk into research and development, things can still go wrong.
In terms of hardware, forgetting heat pads on a consumer GPU might inconvenience the customer, but what if those same heat pads could have helped prevent a fire in an autonomous vehicle?
Extreme as that example is, de Wit says developers must start taking such consequences into consideration, pointing to the growing field of artificial intelligence.
“Developers must think of the bigger picture. When it comes to AI we need to think deeply about how it’s trained and how it will be monitored so that we don’t end up with something like Microsoft’s Tay,” de Wit says.
The CTO tells the conference about an AI solution that was trained to recognise dogs and was also fed an image of a wolf. While identifying images, the developers noticed that every time an image of a husky appeared, the AI marked it as a wolf.
After some investigation it was discovered that the image of the wolf was corrupted and displayed only portions of the animal on a white background. Simply put, the AI determined that anything on a white background (or, as humans know it, snow) was a wolf.
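The husky-and-wolf anecdote can be sketched in a few lines of Python. This is a hypothetical toy, not the actual system from the talk: each "image" is reduced to two invented features, background brightness and ear pointiness, and a naive learner picks whichever single feature best splits the training labels. Because every wolf in the training set happens to be photographed on snow, the learner latches onto the background rather than the animal.

```python
# Toy illustration (hypothetical, not de Wit's actual example) of a classifier
# learning a spurious feature. Each "image" is two numbers in [0, 1]:
# (background_brightness, ear_pointiness).

def train_threshold(samples):
    """Pick the single feature and threshold that best separate the labels.

    samples: list of ((background_brightness, ear_pointiness), label) pairs.
    Returns (feature_index, threshold).
    """
    best = None
    for feature in (0, 1):
        for threshold in sorted({s[0][feature] for s in samples}):
            correct = sum(
                1 for (features, label) in samples
                if (features[feature] >= threshold) == (label == "wolf")
            )
            if best is None or correct > best[0]:
                best = (correct, feature, threshold)
    return best[1], best[2]

# Training set: every wolf photo has a bright, snowy background.
training = [
    ((0.90, 0.8), "wolf"), ((0.95, 0.7), "wolf"), ((0.85, 0.9), "wolf"),
    ((0.20, 0.8), "husky"), ((0.30, 0.9), "husky"), ((0.25, 0.7), "husky"),
]

feature, threshold = train_threshold(training)
print(["background", "ears"][feature])  # background separates perfectly

# A husky photographed in the snow is therefore mislabelled as a wolf.
husky_in_snow = (0.90, 0.9)
print("wolf" if husky_in_snow[feature] >= threshold else "husky")
```

The ear-pointiness feature is deliberately uninformative here, so background brightness wins outright, which is exactly the failure the developers in the anecdote uncovered.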
This is a simple thing to have missed, and while it was harmless in this case, there are times when it isn't.
The aforementioned example of a bot that favoured white male job applicants is a serious problem in an era where transformation in the workplace is integral to our society. Worse still, an AI trained to help judges sentence criminals was biased against black people, resulting in black defendants serving longer sentences than more dangerous white criminals.
“Developers must think of the bigger picture,” reiterates de Wit. “You are going to be the creators and trainers and it’s your responsibility to ask ‘how should I be doing this?'”
And asking yourself appears to be the way of things for the foreseeable future. The International Organisation for Standardisation has only recently begun working on standards for machine learning and AI, with de Wit adding that a complete standard is still two years away.
As you might have guessed, technology doesn't wait for standards, so it's the responsibility of developers to consider both how their solution works in the real world and how it could go horribly wrong.
While this is a massive responsibility, it's made easier through diversity.
“Solutions should be tested across disciplines whether it be tech, business, legal or social science. Development is no longer just about functionality,” says de Wit.
The job of the developer also doesn't end at deployment. When employing ML or AI, algorithms should be constantly monitored and maintained, lest you have husky dogs being marked as wolves.

[Image – CC 0 Pixabay]
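What such post-deployment monitoring might look like can be sketched simply. This is an assumption-laden illustration, not anything described in the talk: it compares the share of "wolf" predictions in a recent window of production traffic against the rate seen at validation time, and raises a flag when they drift apart.

```python
# Minimal monitoring sketch (hypothetical): flag the model for review when
# the rate of "wolf" predictions in production drifts away from the rate
# observed during validation.

def drift_alert(baseline_rate, recent_predictions, tolerance=0.15):
    """Return True if the recent 'wolf' rate strays too far from baseline."""
    if not recent_predictions:
        return False
    recent_rate = recent_predictions.count("wolf") / len(recent_predictions)
    return abs(recent_rate - baseline_rate) > tolerance

# At validation, 10% of images were wolves. In production (winter, snowy
# backgrounds everywhere) the model suddenly calls half of them wolves.
recent = ["wolf"] * 50 + ["husky"] * 50
print(drift_alert(0.10, recent))  # True - time to investigate
```

Real systems would track many more signals than a single class rate, but the principle is the one de Wit raises: the model's behaviour in the wild has to be watched, not assumed.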