
Instagram to warn users over ‘bullying’ language in captions

‘We should all consider the impact of our words,’ says anti-cyberbullying charity

Sabrina Barr
Monday 16 December 2019 11:58 GMT

Instagram is to warn its users when they are using language in their captions that may be perceived as offensive or bullying.

The social media company said it will use artificial intelligence to identify captions containing language that could be deemed harmful.

A similar feature, which alerts users when the comments they’re leaving on other people’s posts contain possibly harmful language, was launched earlier this year.

When an Instagram user posts a caption that could be seen as bullying, a message will appear on their screen informing them that their caption looks similar to others that have previously been reported on the platform.

They are then given the option to edit the caption, learn more about why it was flagged, or post it unchanged.

Earlier this year, Instagram’s head, Adam Mosseri, published a statement outlining the Facebook-owned firm’s commitment to combating cyberbullying.

In the statement, Mosseri said the social media platform is “rethinking the whole experience of Instagram” in order to address the issue.

“We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves,” he said.

“It’s our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”

Instagram has been criticised in the past for failing to take adequate measures to protect its users from online abuse.

In February, the social media company stated it was committed to removing all images related to self-harm on the platform.

Eight months later, Instagram announced plans to extend its ban on self-harm- and suicide-related images to drawings, cartoons and memes.

Dan Raisbeck, co-founder of anti-cyberbullying charity Cybersmile, said the firm’s latest feature is a good example of taking a proactive approach to preventing cyberbullying.

“We should all consider the impact of our words, especially online where comments can be easily misinterpreted,” he said.

“Tools like Instagram’s Comment and Caption Warning are a useful way to encourage that behaviour before something is posted, rather than relying on reactive action to remove a hurtful comment after it’s been seen by others.”

You can contact the National Bullying Helpline on 0845 22 55 787. The helpline is open from 9am to 5pm, Monday to Friday.
