Fake and misleading news is still making headlines compelling enough to share, and Facebook seems to want the very people sharing those stories to police its platform.
Screen grabs of the survey, which appears beneath articles from Rolling Stone and Chortle, have surfaced on Twitter, but we haven’t seen it pop up on any stories in our own timelines. The survey asks users, “To what extent do you think that this link’s title uses misleading language?” followed by five options: not at all, slightly, somewhat, very much and completely.
— Chris Krewson (@ckrewson) December 5, 2016
Last month Google and Facebook said they would deal with fake and misleading news more aggressively, but Facebook made no mention of asking its user base to police its platform. Given that only a few users have spotted the survey, it’s quite possible that Facebook is testing it among a small group of users.
We have contacted Facebook’s local representatives to confirm this and we will update this story should we receive a response.
At this point we need to point out the elephant in the room: asking the same folks who share fake news to rate that news as fake seems like a bit of a contradiction. If you are willing to share a story based on the headline alone – which happens more often than folks care to admit – what motivation is there for you to do further research?
Of course, the argument could be made that Facebook alone cannot police fake news, as its algorithm so gloriously illustrated earlier this year. Perhaps, then, a two-pronged approach – users on a quest to cull fake news from their timelines, combined with Facebook’s machine learning – would be more effective.