Comments posted on Facebook about killing Donald Trump are banned by the social networking site - though violent threats against other people are often allowed to remain, an investigation based on leaked guidelines has claimed.
A dossier apparently containing dozens of training manuals and internal documents obtained by the Guardian newspaper claims to offer an insight into how content posted by Facebook’s users is moderated.
It shows that examples of “credible violence”, such as the phrase “someone shoot Trump”, must be removed by staff because he is a head of state.
However, generic posts stating someone should die are permitted as they are not regarded as credible threats, the newspaper claims.
Staff are told videos of abortions are allowed to remain on Facebook as long as they do not contain nudity, while footage of violent deaths does not have to be deleted because it can help raise awareness of issues such as mental illness, the Guardian said.
All “handmade” art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not, the newspaper claimed.
Facebook will also allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”, it added.
The leak is likely to reignite the debate over the balance between freedom of expression, safety and censorship on the internet.
Last week Theresa May outlined plans for widespread reform of cyberspace.
She said the internet had brought “a wealth of opportunity, but also significant new risks which have evolved faster than society’s response to them”.
Outlining plans under a future Tory government, she said: “We want social media companies to do more to help redress the balance and will take action to make sure they do.
“These measures will help make Britain the best place in the world to start and run a digital business, and the safest place in the world for people to be online.”
Under the plans, social media firms will have to take action to stop search terms directing users to inappropriate sites.
In March tech giants Facebook, Google, Twitter and Microsoft pledged to join forces to tackle extremist content on their platforms.
Facebook has come under fire for allegedly failing to remove sexualised pictures of children from its website after the BBC said it used the site’s “report button” to flag 100 photos, 82 of which were not removed.
The images included under-16s in sexualised poses, pages aimed at paedophiles and an image appearing to be taken from a child abuse video.
Facebook’s monthly users jumped to more than 1.86 billion, according to figures released at the turn of the year.
Monika Bickert, head of global policy management at Facebook, said: “Keeping people on Facebook safe is the most important thing we do.
“(Founder) Mark Zuckerberg recently announced that over the next year, we’ll be adding 3,000 people to our community operations team around the world - on top of the 4,500 we have today - to review the millions of reports we get every week, and improve the process for doing it quickly.
“In addition to investing in more people, we’re also building better tools to keep our community safe.
“We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”
The contents of the dossier were described by children’s charity the NSPCC as “alarming to say the least”.
A spokesman said: “It (Facebook) needs to do more than hire an extra 3,000 moderators.
“Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”