Illustration: Laetitia Géraud
An article by Agence Science-Presse (www.sciencepresse.qc.ca)
Does reducing regulation and oversight on social media platforms automatically lead to an explosion of hateful, toxic statements? That’s certainly what we’ve seen on a host of far-right platforms in recent years. It remains to be seen whether that’s what will happen to X, the platform formerly known as Twitter.
Many users banned from Facebook or X have ended up on Gab or Truth Social where, according to Gianluca Stringhini of Boston University, “Their online activity has tended to become more extreme.” The only upside is that the community they’re communicating with becomes much smaller.
Could the changes Elon Musk is making to X alter things? From the very first week after his acquisition of the platform, preliminary analyses noted a multiplication of racist and anti-Semitic insults. Worries have mounted since then, as many formerly banned Twitter users have been reinstated.
Normally, extremist posts are made on more marginal platforms and remain contained there. It’s when they end up on Facebook or the former Twitter that they gain in popularity, because that’s where journalists discover them. Previously, Twitter’s internal policy limited hate speech and disinformation about COVID, reducing the chances that such messages would reach a large public.
Is it possible that X, which has lost many users, will itself end up among the marginalized extremist platforms? That’s the hypothesis put forward by terrorism expert James Piazza of Penn State University in an article in Nature. He observes that these hate-based communities degenerate to the point of becoming unusable, submerged by bots, pornography and vile content.
For the moment, disinformation experts are reduced to speculating about what will happen. But numerous researchers are preparing protocols to compare the “before Musk” and “after Musk” eras. They want to work out whether there are ways to reduce false information and cyber-harassment.
In the meantime, those who study the impact of social media on ethnic violence are worried, according to Nature. When multiple actors incite the public to commit crimes, those crimes end up being committed, argues Felix Ndahinda, who studies online hate in the context of the Democratic Republic of the Congo’s long-running armed conflicts.
Most hate speech escapes the notice of X’s moderators because it’s phrased in languages that aren’t well monitored. This reality can only help opportunistic hate-mongers attract followers, because even though X is less popular than Facebook, it is still far more widely used than far-right platforms. “This will encourage these actors and increase the virulence of their discourse,” Ndahinda says.
Link to the original article:
https://www.sciencepresse.qc.ca/actualite/2022/11/28/medias-sociaux-moins-reglementes-plus-toxiques