Twitter’s investigation confirms its image cropping algorithm was biased


Earlier this month, Twitter stopped cropping images. While the firm didn’t confirm at the time that this was because of a biased algorithm that highlighted white faces over black faces, we had our suspicions.

Twitter has now confirmed that yes, its image cropping, or saliency, algorithm did indeed have some degree of bias, and that is why #NoCrop Twitter is now a thing.

Testing the algorithm to confirm whether it was indeed biased wasn’t as easy as simply posting an image to Twitter, though.

“To quantitatively test the potential gender and race-based biases of this saliency algorithm, we created an experiment of randomly linked images of individuals of different races and genders,” writes director of software engineering at Twitter, Rumman Chowdhury.

“If the model is demographically equal, we’d see no difference in how many times each image was chosen by the saliency algorithm. In other words, demographic parity means each image has a 50% chance of being salient,” adds the director.

Here’s what Twitter found.

  • In comparisons of men and women there was an 8 percent difference from demographic parity in favour of women
  • In comparisons of black and white individuals, there was a 4 percent difference from demographic parity in favour of white individuals
  • In comparisons of black and white women, there was a 7 percent difference from demographic parity in favour of white women
  • In comparisons of black and white men, there was a 2 percent difference from demographic parity in favour of white men
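The parity figures above come from pairwise comparisons: each image pair is shown to the model, and the share of times one group’s image is chosen is compared against the 50% baseline. A minimal sketch of that arithmetic (illustrative only, not Twitter’s actual code) might look like this:

```python
# Illustrative sketch: computing a demographic parity difference
# from the outcomes of pairwise saliency comparisons.
# This is NOT Twitter's code; function and variable names are our own.

def parity_difference(choices):
    """choices: list of 1s and 0s, where 1 means the saliency model
    picked the first group's image in a paired comparison.
    Returns the percentage-point gap from the 50% parity baseline
    (positive means the first group was favoured)."""
    share = sum(choices) / len(choices)
    return round((share - 0.5) * 100, 2)

# Hypothetical data: in 100 paired crops, the first group's image
# was chosen 54 times, i.e. 4 points above demographic parity.
outcomes = [1] * 54 + [0] * 46
print(parity_difference(outcomes))  # 4.0
```

Under perfect demographic parity the function would return 0.0; the 8, 4, 7 and 2 percent figures Twitter reports are gaps of this kind.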

Twitter also wanted to test whether its algorithm exhibited objectification bias, also referred to as “male gaze” bias. This is where the algorithm may have cropped an image to showcase a woman’s chest or her legs rather than her face.

“We didn’t find evidence of objectification bias — in other words, our algorithm did not crop images of men or women on areas other than their faces at a significant rate,” writes Chowdhury.

Looking at the numbers on objectification bias, Twitter reports that for every 100 images per group it tested, “about three” cropped at a location other than the head, and when the crop did land elsewhere it wasn’t on a physical feature but on something like a number on a sports jersey.

So what is the lesson here? Well, for one, be very cautious when creating an algorithm that is intended for use around the world, and ensure the team working on it is diverse.

As for the second, bonus lesson, we’ll let Chowdhury explain.

“We considered the tradeoffs between the speed and consistency of automated cropping with the potential risks we saw in this research. One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people,” the director said.

And that’s perhaps the most important lesson in all of this.

Twitter decided that folks wanted to see more tweets at a glance and created an algorithm to solve that problem. In the process, it created an even worse problem it has now spent the better part of a year solving.

Image saliency is, from a technical standpoint, a clever idea, but that’s just it: it’s an idea, and if we’re honest, the testing Twitter detailed above should really have happened before the algorithm was set loose on the world.

On the back of this we highly recommend that our readers check out the documentary Coded Bias on Netflix. The doccie details how our intrinsic biases get coded into technology and with technology touching every part of our lives, how dangerous these biases can become.

[Source – Twitter Engineering]

Brendyn Lotz


Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.
