How do bot accounts influence conversations about vaccines?

The internet has given a soapbox to many admirable causes, but in the same breath we can't ignore the not-so-admirable ones.

Among those are climate change deniers, flat earthers and, more recently, folks opposed to vaccines.

Which leads us nicely into the crux of this story. Late last year a paper titled "Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate" was published in the American Journal of Public Health.

In the study, researchers sought to determine whether bots tweeted more about vaccines than the average Twitter denizen. The aim was not simply to gauge how bad the situation was, but rather to see how fake accounts were shaping the global discourse.

“Three of the authors coded relevant tweets as “provaccine,” “antivaccine,” or “neutral” using a codebook developed by 1 of the authors. When coders disagreed, we employed a second round of annotation. We resolved any remaining disagreements by a fourth annotator,” the study reads.

The researchers discovered that known Twitter bots were more likely to tweet about vaccines than the average Twitter user. Russian trolls (which, unlike bots, are typically human-operated accounts) were also more likely to tweet about vaccine-preventable diseases, while recognisable spambots were less likely to tweet about vaccine-preventable diseases than the average Twitter user.

“One strategy used by bots and trolls is to generate several tweets about the same topics, with the intention of flooding the discourse. Thus, to better understand the behavior of each type of account, we examined the total proportion of tweets that were generated by unique users—a possible indicator of bot- or troll-like behavior—to assess whether accounts with higher bot scores exhibited such behavior,” the study reads.

The core of the findings suggests that Twitter bots and trolls have a significant impact on online communications about vaccines. Whether for good or bad, these accounts amplify the discussion and bring it to the fore.

“Russian trolls and sophisticated bots promote both pro- and antivaccination narratives. This behavior is consistent with a strategy of promoting political discord. Bots and trolls frequently retweet or modify content from human users. Thus, well-intentioned posts containing provaccine content may have the unintended effect of ‘feeding the trolls,’ giving the false impression of legitimacy to both sides, especially if this content directly engages with the antivaccination discourse. Presuming bot and troll accounts seek to generate roughly equal numbers of tweets for both sides, limiting access to provaccine content could potentially also reduce the incentive to post antivaccine content.”

The study concludes by urging health practitioners to combat false messages or, rather, to drown out the noise of bots with relevant content.

“Beyond attempting to prevent bots from spreading messages over social media, public health practitioners should focus on combating the messages themselves while not feeding the trolls. This is a ripe area for future research, which might include emphasizing that a significant proportion of antivaccination messages are organized “astroturf” (i.e., not grassroots) and other bottom-line messages that put antivaccine messages in their proper contexts,” the report concludes.

So then, this presents an interesting question that we feel warrants asking: should medical practitioners and scientists be more active online? While we're not saying they should dispense medical advice on a per-patient basis, perhaps it's time for those with the qualifications to start presenting facts in a space that sorely needs them.

We highly recommend giving the full study a read over at the American Journal of Public Health’s website.

[Image – CC 0 Pixabay]
