An AI that was deemed too dangerous to be released has now been released into the world.

Researchers had feared that the model, known as "GPT-2", was so powerful that it could be maliciously misused by everyone from politicians to scammers.

GPT-2 was created for a simple purpose: it can be fed a piece of text, and is able to predict the words that will come next. By doing so, it is able to create long strings of writing that are largely indistinguishable from those written by a human being.
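GPT-2 itself is a large neural network, but the generation loop it runs is simple to illustrate: predict a likely next word from what has been written so far, append it, and repeat. The toy sketch below uses a basic bigram (word-pair) model rather than anything resembling OpenAI's actual method; the corpus, function names and sampling scheme are illustrative assumptions only.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word in the corpus, the words seen following it."""
    words = text.split()
    following = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        following[prev].append(nxt)
    return following

def generate(model, seed, length, rng=None):
    """Repeatedly predict a next word given the current one, as in
    autoregressive generation (here with a trivial bigram model)."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # no observed continuation; stop generating
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(generate(model, "the", 8))
```

A model like GPT-2 replaces the bigram lookup with a neural network conditioned on the whole preceding text, which is what lets it sustain long, coherent passages rather than locally plausible word pairs.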


But it became clear that the model was worryingly good at that job: its text generation is convincing enough that it could be used to scam people and may undermine trust in the things we read.

What's more, the model could be abused by extremist groups to create "synthetic propaganda", automatically generating long texts promoting white supremacy or jihadist Islamism, for instance.

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At the time, the organisation released only a very limited version of the tool, with 124 million parameters. It has released progressively larger versions since, and has now made the full model available.

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

It hopes that the release can partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said there was a variety of ways that malicious people could misuse the programme. The generated text could be used to create misleading news articles, impersonate other people, or automatically produce abusive, fake or spam content for social media – along with possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.

"These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns," they wrote. "The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images."

The researchers said that experts needed to work to consider "how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures".

