From The New York Times
The spread of misinformation on social media is an alarming phenomenon that scientists have yet to fully understand. While the data show that false claims are increasing online, most studies have analyzed only small samples or the spread of individual fake stories.
My colleagues Soroush Vosoughi, Deb Roy and I set out to change that. We recently analyzed the diffusion of all of the major true and false stories that spread on Twitter from its inception in 2006 to 2017. Our data included approximately 126,000 Twitter “cascades” (unbroken chains of retweets with a common, singular origin) involving stories spread by three million people more than four and a half million times.
Disturbingly, we found that false stories spread significantly more than did true ones. Our findings were published on Thursday in the journal Science.
We started by identifying thousands of true and false stories, using information from six independent fact-checking organizations, including Snopes, PolitiFact and Factcheck.org. These organizations exhibited considerable agreement — between 95 percent and 98 percent — on the truth or falsity of these stories.
Then we searched Twitter for mentions of these stories, followed the sharing activity to the “origin” tweets (the first mention of a story on Twitter) and traced all the retweet cascades from every origin tweet. We then analyzed how they spread online.
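The cascade-tracing step above can be sketched in miniature. This is an illustrative reconstruction, not the study's actual pipeline: the input schema (pairs of tweet ID and the ID of the tweet it retweets) and the function name are assumptions made for the example. It shows how, given origin tweets and their retweet links, each cascade becomes a tree whose size and depth can be measured.

```python
from collections import defaultdict

def build_cascades(tweets):
    """Group tweets into retweet cascades rooted at origin tweets.

    `tweets` is a list of (tweet_id, retweeted_id) pairs, where
    retweeted_id is None for an origin tweet. (Hypothetical schema
    for illustration only.)
    """
    children = defaultdict(list)
    roots = []
    for tid, parent in tweets:
        if parent is None:
            roots.append(tid)          # an "origin" tweet starts a cascade
        else:
            children[parent].append(tid)

    def size(node):
        # Total tweets in the cascade rooted at `node`.
        return 1 + sum(size(c) for c in children[node])

    def depth(node):
        # Longest unbroken retweet chain below `node`.
        if not children[node]:
            return 0
        return 1 + max(depth(c) for c in children[node])

    return {root: {"size": size(root), "depth": depth(root)} for root in roots}

# Example: one origin tweet "a", retweeted by "b" and "c"; "b" retweeted by "d".
print(build_cascades([("a", None), ("b", "a"), ("c", "a"), ("d", "b")]))
# {'a': {'size': 4, 'depth': 2}}
```

Measures like these (cascade size, depth, and breadth) are what allow "farther, faster and more broadly" to be quantified across many cascades.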
For all categories of information — politics, entertainment, business and so on — we found that false stories spread significantly farther, faster and more broadly than did true ones. Falsehoods were 70 percent more likely to be retweeted, even when controlling for the age of the original tweeter’s account, its activity level, the number of its followers and followees, and whether Twitter had verified the account as genuine. These effects were more pronounced for false political stories than for any other type of false news.
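To make the "70 percent more likely" figure concrete: if that number is read as an odds ratio of 1.7 (an assumption for this sketch; the study's actual model specification is not reproduced here), one can translate it into probabilities for a hypothetical baseline retweet rate.

```python
def apply_odds_ratio(p, odds_ratio):
    """Return the probability obtained by multiplying the odds of p
    by `odds_ratio`. Used here to illustrate what an odds ratio of
    1.7 means in probability terms."""
    odds = p / (1 - p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical: if a true story has a 10% chance of being retweeted,
# an odds ratio of 1.7 implies roughly a 15.9% chance for a false one.
print(round(apply_odds_ratio(0.10, 1.7), 4))
# 0.1589
```

Note that an odds ratio of 1.7 does not mean the probability itself rises by exactly 70 percent; the gap shrinks as the baseline probability grows, which is why the odds formulation is used in regression models of this kind.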
Surprisingly, Twitter users who spread false stories had, on average, significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were verified as genuine by Twitter significantly less often and had been on Twitter for significantly less time than Twitter users who spread true stories. Falsehood diffused farther and faster despite these seeming disadvantages.
And despite concerns about the role of web robots in spreading false stories, we found that human behavior contributed more to the differential spread of truth and falsity than bots did. Using established bot-detection algorithms, we found that bots accelerated the spread of true stories at approximately the same rate as they accelerated the spread of false stories, implying that false stories spread more than true ones as a result of human activity.
Why would that be? One explanation is novelty. Perhaps the novelty of false stories attracts human attention and encourages sharing, conferring status on sharers who seem more “in the know.”
Read the full post at The New York Times
Sinan Aral is the David Austin Professor of Management at MIT, where he is a Professor of IT & Marketing and a Professor in the Institute for Data, Systems, and Society, where he co-leads MIT’s Initiative on the Digital Economy.