Does Twitter’s neural network have a bias problem? It sure looks like it


Back in 2018, Twitter introduced a new way of cropping image previews.

The social network announced that it would use a neural network to predict salient points (the areas of an image your eyes are most likely to be drawn to) and crop previews around what you most want to see.
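To make the idea concrete, here is a toy sketch of saliency-based cropping. This is our illustration, not Twitter's actual model: the real system uses a trained neural network to produce the saliency scores, whereas here we simply assume a grid of scores is given and slide a crop window across it, keeping the position with the highest total saliency.

```python
def best_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window maximising summed saliency."""
    rows, cols = len(saliency), len(saliency[0])
    best_score, best_pos = float("-inf"), (0, 0)
    # Exhaustively try every valid window position.
    for top in range(rows - crop_h + 1):
        for left in range(cols - crop_w + 1):
            score = sum(
                saliency[r][c]
                for r in range(top, top + crop_h)
                for c in range(left, left + crop_w)
            )
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# A 4x6 "image" whose most salient region (a face, say) sits to the right.
scores = [
    [0, 0, 0, 1, 2, 1],
    [0, 0, 0, 2, 9, 2],
    [0, 0, 0, 2, 9, 2],
    [0, 0, 0, 1, 2, 1],
]
print(best_crop(scores, 2, 4))  # → (1, 2): the window hugging the high scores
```

The bias question, of course, sits upstream of this step: if the scoring model systematically rates some faces as less salient than others, the crop that follows will reflect that.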

At the time we lauded it as an end to the often annoying “open for a surprise” tweets that were doing the rounds, but now it seems Twitter’s neural network has bigger problems than memes.

The big problem is racial bias. How can a piece of software be biased, you might ask? Well, people need to program that software, and if your development team lacks representation from different races and cultures, it’s a recipe for disaster.

This is especially true of artificial intelligence and neural networks, as Microsoft’s Tay taught us.

So how does this relate to Twitter and image cropping?

At the weekend, several Twitter users took to the social media platform to see how it would crop images featuring a black man and a white man.

In the tweet below, via The Verge, the images contain Mitch McConnell and Barack Obama, though you wouldn’t know that from the previews.

The problem here is that, given Twitter’s explanation of how its neural network works, the implication is that folks want to see a white face more than a black face.

And it’s not just Twitter. Zoom appears to be causing problems for black users of virtual backgrounds, whose heads get cropped out of the image.

When a Zoom user tried to share the problem they were having on Zoom via Twitter, Twitter cropped the black gentleman out of the screenshot.

Now, to its credit, Twitter has said it will investigate this further with Twitter communications team member Liz Kelley saying, “We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate.”

The question we have is, did the neural network learn this behaviour after two years out in the wild or was it taught? It’s possible that Twitter’s user base influenced the neural network in some way but to determine the exact reason the above is happening requires further investigation.

Now, this cropping doesn’t happen all of the time but it appears to be happening often enough to warrant addressing.

While it might not seem like a big problem to some, image recognition is being used more and more widely, and in some instances those systems carry an inherent bias against people of colour, whether intentional or not.

We’re curious to see how Twitter addresses this and fixes the issues its community has discovered.

Brendyn Lotz
Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.