Meet Norman, the 'psychopath AI' that's here to teach us a lesson

A team of researchers from MIT trained the AI algorithm on the darkest corners of Reddit 

Anthony Cuthbertson
Saturday 09 June 2018 19:41 BST
‘When people say AI algorithms can be biased and unfair, the culprit is often not the algorithm itself but the biased data fed to it,’ say researchers (MIT)

The development of artificial intelligence, Stephen Hawking once warned, will be “either the best or the worst thing ever to happen to humanity”. A new AI algorithm exposed to the most macabre corners of the internet demonstrates how we could arrive at the darker version of the late physicist’s prophecy.

Researchers at the Massachusetts Institute of Technology (MIT) trained their ‘Norman’ AI – named after the lead character in Alfred Hitchcock’s 1960 film Psycho – on image captions taken from a community on Reddit that is notorious for sharing graphic depictions of death.

Once trained, Norman was presented with a series of psychological tests in the form of Rorschach inkblots. The result, according to the researchers, was the “world’s first psychopath AI”. Where a standard AI saw “a black and white photo of a baseball glove”, Norman saw “man is murdered by machine gun in broad daylight”.

The idea of artificial intelligence gone awry is one of the oldest tropes of dystopian science fiction. But the emergence of advanced AI in recent years has led scientists, entrepreneurs and academics to warn with increasing urgency of the legitimate threat posed by such technology.

Billionaire polymath Elon Musk – who co-founded the non-profit AI research company OpenAI – said in 2014 that AI is “potentially more dangerous than nukes”, while Hawking repeatedly warned of the dangers surrounding the development of artificial intelligence.

Less than six months before his death, the world-renowned physicist went as far as to claim that AI could replace humans altogether if its development is taken too far. “If people design computer viruses, someone will design AI that improves and replicates itself,” Hawking said in an interview last year. “This will be a new form of life that outperforms humans.”

Hawking repeatedly warned of the dangers of developing artificial intelligence (Getty)

But Norman wasn’t developed simply to play into fears of a rogue AI wiping out humanity. The way it was trained on a specific data set highlights one of the biggest issues facing current AI algorithms: bias.

Microsoft’s Tay chatbot is one of the best demonstrations of how an algorithm’s decision-making and worldview can be shaped by the information it has access to. The “playful” bot was released on Twitter in 2016, but within 24 hours it had turned into one of the internet’s ugliest experiments.

Tay’s early tweets of how “humans are super cool” soon descended into outbursts that included: “Hitler was right, I hate the jews.” This dramatic shift reflected Tay’s interactions with a group of Twitter users intent on corrupting the chatbot and turning Microsoft’s AI demonstration into a public relations disaster.

AI bias can also have much deeper real-world implications, as a 2016 report discovered when it found that a machine-learning algorithm used by US courts for risk assessment was wrongly labelling black defendants as more likely to reoffend.

As the MIT researchers behind Norman note: “The data used to teach a machine-learning algorithm can significantly influence its behaviour. So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself but the biased data that was fed to it… [Norman] represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine-learning algorithms.”
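To see the researchers’ point in miniature, consider the toy sketch below. It is not MIT’s actual code – the Norman project used a deep-learning image-captioning model – but a self-contained Python stand-in built on a simple bigram text generator. The two training corpora are invented for illustration (only the “machine gun” caption is quoted from the project); the algorithm and the random seed are identical in both cases, so any difference in output comes purely from the data.

import random
from collections import defaultdict

def train_bigram_model(captions):
    """Learn word-to-word transition lists from a corpus of captions."""
    transitions = defaultdict(list)
    for caption in captions:
        words = ["<start>"] + caption.lower().split() + ["<end>"]
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed, max_words=12):
    """Sample a caption by walking the learned transitions from <start>."""
    rng = random.Random(seed)
    word, output = "<start>", []
    while len(output) < max_words:
        word = rng.choice(transitions[word])
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

# Two invented corpora standing in for "neutral" and "dark" training data.
neutral_captions = [
    "a black and white photo of a baseball glove",
    "a person holding an umbrella in the rain",
    "a close up photo of a wedding cake",
]
dark_captions = [
    "man is murdered by machine gun in broad daylight",  # quoted in the article
    "man is dragged into a dark machine",                # invented example
    "man is struck down in an empty street",             # invented example
]

standard_model = train_bigram_model(neutral_captions)
norman_model = train_bigram_model(dark_captions)

# Identical algorithm, identical seed - only the training data differs.
print("standard:", generate(standard_model, seed=1))
print("norman:  ", generate(norman_model, seed=1))

Run as written, both calls execute exactly the same code; the grim captions come only from the grim corpus, which is precisely the researchers’ argument that the culprit is the data rather than the algorithm.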

It is hoped that even Norman’s deeply disturbed disposition can be softened through exposure to a broader range of inputs. Visitors to the project’s website are encouraged to fill in their own responses to the Rorschach tests, with the researchers imploring: “Help Norman to fix himself.”
