OpenAI, creator of the AI agents that can play Dota 2, has built a new algorithm it calls GPT-2 that can write fake news. While this isn't exactly groundbreaking, the algorithm is rather good at it.

How good?

When fed the starting line "Russia has declared war on the United States after Donald Trump accidentally …" the algorithm wrote the following story.

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

That entire story is false, save for the final line: Moscow really did annex Crimea in 2014.

Now, to be clear, OpenAI didn't set out to create an algorithm that's a master of writing fake news. A report by MIT Technology Review suggests that the OpenAI researchers set out to develop a general-purpose language algorithm that could translate text, answer questions and perform other tasks.

OpenAI notes, however, that the algorithm doesn't always produce passable content.

“Overall, we find that it takes a few tries to get a good sample, with the number of tries depending on how familiar the model is with the context. When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time,” OpenAI said in a blog post.

The danger in all of this is that AI is becoming frighteningly good at creating fake news, and if the process is automated (which it can be), it might become rather difficult to tell real from fake.

OpenAI has released a smaller version of GPT-2 for other developers to look at, but without the dataset, the training code, or the full model weights. The organisation hopes that this reluctance to publish its complete research will positively influence others in the AI development community.

“Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community,” said OpenAI.

The GPT-2 algorithm can be found over at GitHub.

[Image – CC 0 Pixabay]