Social media executives could face prison time in Australia if their platforms fail to “quickly” remove streamed violent footage such as that of the New Zealand mosque shootings.
Australia’s parliament passed legislation just weeks after the March 15 attacks in Christchurch, in which an Australian white supremacist apparently used a camera to broadcast live on Facebook as he shot worshippers in two mosques.
The bills were rushed through just before elections expected in May.
Attorney General Christian Porter told parliament: “Together we must act to ensure that perpetrators and their accomplices cannot leverage online platforms for the purpose of spreading their violent and extreme propaganda – these platforms should not be weaponised for evil.”
The law makes it a crime for social media platforms not to remove “abhorrent violent material” quickly. Bosses could face up to three years in prison and a fine of 10.5 million Australian dollars (£5.6 million), or 10% of the platform’s annual turnover – whichever is larger.
Abhorrent violent material is defined as recordings of acts of terrorism, murder, attempted murder, torture, rape and kidnapping. The material must have been recorded by the perpetrator or an accomplice for the law to apply.
Platforms anywhere in the world face fines of up to 840,000 Australian dollars (£450,500) if they are aware their service is streaming such material occurring in Australia and fail to notify the Australian Federal Police.
Critics warn that the laws, among the most restrictive on online communication in the democratic world, could have unforeseen consequences, including media censorship and reduced investment in Australia.
The Digital Industry Group – an association representing the digital industry in Australia, including Facebook, Google and Twitter – said taking down abhorrent content was a “highly complex problem” that required consultation with a range of experts, which the government had not done.
“This law, which was conceived and passed in five days without any meaningful consultation, does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” the group’s managing director Sunita Bose said.
“This creates a strict internet intermediary liability regime that is out of step with the notice-and-takedown regimes in Europe and the United States, and is therefore bad for internet users as it encourages companies to proactively surveil the vast volumes of user-generated content being uploaded at any given minute,” she added.
It comes as Facebook boss Mark Zuckerberg assured senior Irish politicians that he would work with governments to establish new policies to regulate social media.
This week he told the Irish broadcaster RTE News: “Either way we’re going to have responsibility for making sure that we can police harmful content and get it off our services.”
“I think these days a lot of people don’t want tech companies or any private companies to be making so many decisions about what speech is acceptable and what harmful content needs to be taken down.
“So I think there is a role for a broader public debate here and I think some of these things would benefit from a more democratic process and a more active government role.”
Twitter CEO Jack Dorsey said earlier this week that he saw his platform as an “educator, helping regulators and legislators understand what’s happening with technology”.
British politicians have stepped up calls for more social media regulation, including Chancellor Philip Hammond, who announced in his Spring Statement that the competition watchdog would examine tech firms’ dominance of the £14 billion digital advertising market.
Meanwhile, parliament’s science and technology committee published a report calling for social media companies to be legally required to protect the health and wellbeing of their users.