Coe Calls Olympics Social Media Survey Results 'disturbing'

Paris (APP - UrduPoint / Pakistan Point News - 25th Nov, 2021): World Athletics president Sebastian Coe described as "disturbing" the results of a study conducted during the Tokyo Olympics to identify and address targeted, abusive messages sent to athletes via social media.

The survey, intended to gauge the level of online abuse in athletics, drew its findings from a sample of 161 Twitter handles belonging to current and former athletes involved in the Games (derived from a list of 200 athletes selected by World Athletics).

The handles were tracked during the study period, which began one week prior to the Olympic opening ceremony and concluded the day after the Olympic closing ceremony (July 15 - August 9).

The survey found that 23 of the athletes received targeted abuse, 16 of them women; 115 of the 132 identified abusive posts were directed at female athletes.

Female athletes received 87% of all abuse.

Two athletes -- both black and female -- received 63% of identified abuse.

Unfounded doping accusations made up 25% of abusive messages, while 10% consisted of transphobic (9%) and homophobic (1%) posts.

Some 89% of racist abuse was targeted at US athletes, even though they represented only 23% of the study set.

The two most common categories of abuse were sexist (29%) and racist (26%) posts, which together accounted for 55% of all identified abuse.

"This research is disturbing in so many ways," said Coe in a statement.

"What strikes me the most is that the abuse is targeted at individuals who are celebrating and sharing their performances and talent as a way to inspire and motivate people.

"To face the kinds of abuse they have is unfathomable and we all need to do more to stop this.

"Shining a light on the issue is just the first step." In the study timeframe, 240,707 tweets including 23,521 images, GIFs and videos were captured for analysis.

This included text analysis through searches for slurs, offensive images and emojis, and other phrases that could indicate abuse.

It also used AI-powered Natural Language Processing to detect threats by understanding the relationship between words (allowing it to determine the difference between "I'll kill you" and "you killed it", for example).
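As a rough illustration of the kind of processing described, and not the study's actual pipeline, the sketch below pairs a placeholder keyword screen with dependency parsing via the spaCy library to separate a threat such as "I'll kill you" from the idiom "you killed it". The keyword list, model choice and rules are assumptions made purely for illustration.

```python
# A minimal illustrative sketch, NOT the study's actual pipeline: a placeholder
# keyword screen plus spaCy dependency parsing to tell a threat like
# "I'll kill you" apart from the idiom "you killed it".
# Assumptions: spaCy and its small English model are installed; the keyword
# list below is a placeholder, not real data from the survey.
import spacy

nlp = spacy.load("en_core_web_sm")

ABUSIVE_KEYWORDS = {"<slur-1>", "<slur-2>"}  # placeholders only


def keyword_screen(text: str) -> bool:
    """Crude lexical check: does the text contain any flagged keyword?"""
    lowered = text.lower()
    return any(term in lowered for term in ABUSIVE_KEYWORDS)


def looks_like_threat(text: str) -> bool:
    """Flag 'kill'-type verbs only when 'you' is their direct object,
    so "I'll kill you" is flagged but "you killed it" is not."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ == "kill":
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            if any(o.lower_ == "you" for o in objects):
                return True
    return False


if __name__ == "__main__":
    for sample in ("I'll kill you", "you killed it"):
        print(sample, "->", looks_like_threat(sample))
```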