Online Abuse Will Look Very Different During The 2024 Olympic Games. Here's Why

We spoke to Kirsty Burrows about how the team are treating trolls this year.

Prior to her 2012 Olympic run, British pole vaulter Holly Bradshaw had jumped to third in the world. Before she’d turned 20, she’d won a World Championships bronze medal; at the Tokyo 2020 Games, she became the first pole vaulter ever to win an Olympic medal for Team GB. 

But in 2021, she wrote, “At one point, if you Googled my name, one of the top links just said ‘Holly Bleasdale [her maiden name] fat.’”

The results, she suggests, may have been partly driven by an onslaught of online comments about her weight.

Trolling directed towards athletes is nothing new. In 2021, footballer Thierry Henry left social media because “The sheer volume of racism, bullying and resulting mental torture to individuals is too toxic to ignore”; he’s far from alone.

What is new, however, is how the 2024 Olympics team plans to deal with these kinds of comments. 

HuffPost UK recently spoke to Kirsty Burrows, head of the Safe Sport Unit at the International Olympic Committee, about how the team plans to use AI to better safeguard athletes from online abuse this year. Here’s what we found out:

This year’s online protection is more than a filter

Though social media platforms can already flag certain words and phrases, the 2024 Olympics’ tool is “not a filter,” Burrows explained. 

Instead, the “AI will scan millions of data points and, using natural language understanding, identify where there might be targeted or fixated abusive content” ― in more than 35 languages. 

After a first scan, “there’s a proprietary threat algorithm which is applied on top of that [data]... the AI will then filter it based on the typology of violence that is being detected, and then it goes through a human triage.” Abuse is sorted into red, amber (grey-area), and green content warnings. 

Comments which violate rules are then removed after triage, though the athletes may see them if they want to; in fact, they can opt out of the AI tool altogether if they like.
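The steps Burrows describes ― scan, threat-score, sort by tier, then route to human reviewers ― can be sketched in code. Everything below is an illustrative assumption: the function names, thresholds, and toy word list are invented for the example, and the IOC’s actual proprietary algorithm is of course far more sophisticated.

```python
# A hypothetical sketch of a tiered moderation triage pipeline:
# scan -> threat score -> red/amber/green tiers -> human review queue.
# All names, thresholds, and terms here are illustrative, not the IOC's system.

def threat_score(post: str) -> float:
    """Stand-in for a proprietary threat algorithm; returns a score from 0.0 to 1.0."""
    abusive_terms = {"fat", "useless", "quit"}  # toy word list for illustration only
    words = [w.strip(",.!?").lower() for w in post.split()]
    hits = sum(1 for w in words if w in abusive_terms)
    return min(1.0, hits / max(1, len(words)) * 5)

def triage(posts):
    """Sort posts into red / amber / green; red and amber go on to human triage."""
    tiers = {"red": [], "amber": [], "green": []}
    for post in posts:
        score = threat_score(post)
        if score >= 0.8:
            tiers["red"].append(post)    # likely guideline-breaching or criminal
        elif score >= 0.3:
            tiers["amber"].append(post)  # grey area: needs human judgement
        else:
            tiers["green"].append(post)  # no action required
    return tiers

posts = ["Great jump today!", "You are fat and useless, quit"]
result = triage(posts)
```

The key design point the article describes is that the machine only narrows the funnel; anything red or amber still passes through a human before removal.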

The team also hope to be able to learn more about the typologies of online violence, so they can better tailor their online protection to different groups in different sports at different times.

The tool is far faster at removing online hate than previous methods

“If you were to read each post at one second per post, it would take you 16 years to go through, and if we take that four per cent industry average of [posts being] online abuse, we’re looking at 20-40 million posts that potentially [violate guidelines]. And again we’re just talking about posts here; [we’re] not even talking about comments as well,” Burrows explained. 
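Burrows’ figures hold up to a back-of-the-envelope check: 16 years of reading at one post per second is roughly half a billion posts, and four per cent of that lands at the bottom of her 20-40 million range.

```python
# Back-of-the-envelope check of the figures quoted above.
seconds_per_year = 365 * 24 * 3600       # ~31.5 million seconds
total_posts = 16 * seconds_per_year      # 16 years at one post per second
abusive_posts = int(total_posts * 0.04)  # 4% industry-average abuse rate

print(f"{total_posts:,} posts, of which ~{abusive_posts:,} potentially abusive")
# → 504,576,000 posts, of which ~20,183,040 potentially abusive
```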

The tool requires less case-by-case human approval than current methods to get through that mass of data, as it “only scans open-source data, and it’s only detecting sort of targeted violence, and it’s only acting on violence which breaches community guidelines or is potentially criminal in nature.” That speeds the vetting process up considerably.

In fact, though it doesn’t yet delete abuse at the point of posting, comments which violate guidelines are often deleted so quickly with the new system that Burrows says athletes never get the chance to read them.

“What we’ve done is to find the best solution that we can, which it may not be... instantaneous, but it is extremely quick. And what we’ve heard from others that have worked with the same service provider is that on average, all the red-level abuse is removed around 90% of the time before the athlete has a chance to see it,” she explained.

“What we know about online violence is that it’s really harmful. It causes real embodied harm,” Burrows added. 

The Olympics are also offering a dedicated mental health helpline for four years following the 2024 Games, and will introduce the Games’ Athlete365 Mind Zone, staffed by trained mental health professionals, this year.

The tool may be bigger than just the Games

At the moment, the onus is partly on athletes to report abuse. That means reading, and possibly being hurt by, cruel comments to keep their page vitriol-free.

But Burrows hopes the new Olympic tool can help to redirect the burden of that responsibility, both in and outside of the games.

“I think this has [a] much broader implication,” she told HuffPost UK, “because it’s really saying ― it’s starting to see real-world action for online harm and saying that you can’t act with impunity online. You cannot commit violence against somebody who’s not doing anything.” 

“The type of violence that we see [online] is just shocking... and so if there’s something that we can do to avoid that, and then also to hopefully utilise this to push for stronger measures and to better develop data-driven policies to more broadly flag this issue, then that for us is really, really good,” she added. 

70% of Olympians only go to the games once, Burrows points out. It’s her hope that this year’s athletes don’t have to spend a second of that experience worried about online abuse.