People would rather spread juicy lies than the truth, according to new research from the Massachusetts Institute of Technology (MIT).
Last week, in a writeup of the research, Science reported that claims that are demonstrably false – as in, tweets related to news that had been investigated by six independent fact-checking organizations, including PolitiFact, Snopes and FactCheck.org – are 70% more likely to be retweeted than true claims. Bogus claims about politics spread further than any other category of news included in the analysis.
Must be those meddlesome bots, eh? That’s what the researchers preliminarily assumed. But it turned out that it was humans, relishing new (false) information that they hadn’t seen before. The team arrived at its conclusion by using bot-detection technology to weed out social media shares generated by bots.
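That filtering step can be sketched roughly like this – a minimal Python illustration, not the team's actual pipeline. It assumes each share record carries a bot-likelihood score (the kind a tool such as Botometer produces); the `bot_score` field and the 0.5 cutoff are assumptions made up for the example:

```python
# Hedged sketch: assumes each share carries a bot-likelihood score in [0, 1],
# where higher means more bot-like. The 0.5 threshold is an assumption for
# illustration, not the study's actual cutoff.
BOT_SCORE_THRESHOLD = 0.5

def human_shares(shares):
    """Keep only shares whose account scores below the bot threshold."""
    return [s for s in shares if s["bot_score"] < BOT_SCORE_THRESHOLD]

shares = [
    {"user": "alice", "bot_score": 0.1},
    {"user": "newsbot42", "bot_score": 0.9},  # likely automated, filtered out
    {"user": "bob", "bot_score": 0.3},
]

print(len(human_shares(shares)))  # 2 of the 3 shares survive the filter
```

Once the likely-bot accounts are removed, the spread of true and false stories can be recomputed over the remaining, presumably human, shares – which is where the researchers found the gap persisted.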
Even without the busybody bots, fake news still spread at about the same rate and to the same number of people. Specifically, the researchers found that truth rarely reached more than 1,000 Twitter users. The most outlandish fake news, on the other hand, routinely reached well over 10,000 people.
One example was the false reports about the boxer Floyd Mayweather wearing a Muslim headscarf and challenging people to fight him at a Donald Trump rally during the 2016 US presidential election. The story originated on a sports comedy website, catching fire as people took it seriously.
Soroush Vosoughi, a data scientist at MIT, told Science that it was the viral posts after the Boston Marathon bombings – posts that spread rumors about a missing Brown University student thought to be a bombing suspect (he later turned out to have committed suicide for reasons unrelated to the bombing) – that really brought home to him what an effect fake news can have on real lives.
[That’s when I realized] that these rumors aren’t just fun things on Twitter, they really can have effects on people’s lives and hurt them really badly.
If we can’t blame bots for fake news going viral, his team thought, perhaps it has to do with how many followers a disseminating account has?
Nope: people who spread fake news actually have fewer followers, not more.
That left the content of the tweets themselves. What the researchers found was that tweets with false information were novel: they carried information that a Twitter user hadn’t seen before, making them feel fresher than true news stories. The fake news tweets were also far more emotionally provocative, eliciting more surprise and disgust in their comments.
Science quotes Alex Kasprak, a fact-checking journalist at Snopes:
If something sounds crazy stupid you wouldn’t think it would get that much traction. But those are the ones that go massively viral.
Unfortunately, crazy stupid can become crazy dangerous. In June 2017, a 29-year-old man who fired a military-style assault rifle inside a popular Washington pizzeria, wrongly believing he was saving children trapped in a sex-slave ring, was sentenced to four years in prison.
The judge said at the time that it was “sheer luck” that Edgar Maddison Welch didn’t kill anybody.
That was a case study in how fake news gets onto Twitter in the first place. It started with hacked emails on WikiLeaks… which got scoured for political wrongdoing in the Clinton campaign staff by a popular Reddit forum dedicated to Donald Trump and 4chan’s far-right fringe message board… and which wound up confabulated into “PizzaGate” by somebody on 4chan who connected the phrase “cheese pizza” to pedophiles, who use the initials “c.p.” to denote child pornography on chat boards.
Thanks to a recent study from the University of Alabama at Birmingham, Cyprus University of Technology, University College London and Telefonica Research, we have a better understanding of how tightly knit, highly active fringe communities on sites such as 4chan and Reddit form an important part of our current news ecosystem, often succeeding in spreading alternative news to mainstream social networks such as Twitter and on out to the greater web.
One takeaway from these studies: if we’re getting our news from Twitter, we should bring a healthy dose of skepticism to the table. At the rate it’s going, fake news is elbowing out the truth, and we don’t even have bots to blame: just our own, very human hunger for something new.
Source: Naked Security