Mark Zuckerberg has announced that Facebook is developing AI software to identify terrorists, suicide risks and online bullying more quickly.
The algorithm, announced in a 5,000-word letter from the social network's founder, will take “many years” to develop as software engineers teach it how to flag content.
The system is currently in the early stages of development, being taught to scan photos and videos for risk factors, and is already responsible for over 30% of the content that goes on to be reviewed by Facebook's moderation team.
Zuckerberg highlighted the growing use of the platform by terrorist organisations, saying: “Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization.”
He also pointed to the growing problem of fake news on the site: “This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide.”
Zuckerberg admitted that this “better approach” to platform monitoring was inspired, at least in part, by mistakes the company has made in the past in deciding what to remove.
“There have been terribly tragic events - like suicides, some live streamed - that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out,” said Zuckerberg.
Facebook has also come under fire for removing images it deemed “sexually explicit” for showing inappropriate levels of nudity, such as the iconic photo of a Vietnamese girl fleeing a napalm attack and a sixteenth-century statue of Neptune.
Facebook has already developed tools to help people expressing suicidal thoughts: “When someone is thinking of committing suicide or hurting themselves, we’ve built infrastructure to give their friends and community tools that could save their life.
“And when a child goes missing, we’ve built infrastructure to show Amber Alerts - and multiple children have been rescued without harm.”
But there is still a long way to go.