Crowdsourcing is the best weapon in the fight against fake news – David Rand and Gordon Pennycook

Associate Professor of Management Science and Brain and Cognitive Sciences, MIT Sloan School of Management

From The Hill

The problem of misinformation isn’t new, but it gained widespread attention during the 2016 presidential election when blatantly false stories (“fake news”) spread widely on social media.

Since then, a broad consensus has emerged that we must better understand why people believe and share misinformation and figure out how to stop it.

Limiting the spread of fake news, hyperpartisan content, conspiracy theories and other kinds of misinformation is important for our democracy. It seems likely to narrow the gap between liberals and conservatives over basic facts and to defuse some of the cross-party animosity that is so prevalent today. Less misinformation may also make it harder for candidates to win elections based on blatantly false claims.

While there has been a lot of scholarly work documenting the spread of misinformation, there has been much less study of possible solutions. And most of the solutions that social media companies have been deploying so far haven’t been very effective; they also have been almost exclusively focused on fake news rather than other kinds of problematic content.

For example, partnering with professional fact-checkers isn’t scalable because they can’t keep up with the rapid creation of false stories, and fact-checkers are sometimes accused of having a liberal bias.

Furthermore, putting warnings on content found to be false can be counterproductive because it makes misleading stories that didn’t get checked seem more accurate — the so-called “implied truth” effect. And nobody wants social media platforms to be deciding themselves what is trustworthy and what material they should censor.

So, we have been working to figure out ways to effectively fight misinformation on social media. In a recent paper, we document one approach that seems surprisingly promising: using crowdsourcing to identify unreliable outlets and then making content from those outlets less likely to appear in the newsfeed.
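To make the idea concrete, here is a minimal sketch of how such a scheme might work in principle: crowd ratings are averaged into a per-outlet trust score, and a post's ranking in the feed is weighted by the trust score of its outlet. This is an illustrative assumption of one possible implementation, not the procedure used in our paper; the outlet names, scores, and the simple engagement-times-trust weighting are all hypothetical.

```python
# Illustrative sketch: crowdsourced outlet trust used to down-rank content.
# All outlet names, ratings, and the ranking rule are hypothetical examples.
from collections import defaultdict
from statistics import mean

# Hypothetical crowd ratings: (user_id, outlet, trust score from 0 to 1).
ratings = [
    ("u1", "reliable-news.example", 0.90),
    ("u2", "reliable-news.example", 0.80),
    ("u3", "reliable-news.example", 0.85),
    ("u1", "fakenews.example", 0.10),
    ("u2", "fakenews.example", 0.20),
    ("u3", "fakenews.example", 0.05),
]

def outlet_trust(ratings):
    """Average each outlet's trust score across the rating crowd."""
    by_outlet = defaultdict(list)
    for _, outlet, score in ratings:
        by_outlet[outlet].append(score)
    return {outlet: mean(scores) for outlet, scores in by_outlet.items()}

def rank_feed(posts, trust, default_trust=0.5):
    """Weight each post's engagement score by its outlet's trust score,
    so content from low-trust outlets appears lower in the feed."""
    return sorted(
        posts,
        key=lambda p: p["engagement"] * trust.get(p["outlet"], default_trust),
        reverse=True,
    )

if __name__ == "__main__":
    trust = outlet_trust(ratings)
    feed = rank_feed(
        [
            {"title": "Shocking claim!", "outlet": "fakenews.example", "engagement": 120},
            {"title": "Budget analysis", "outlet": "reliable-news.example", "engagement": 90},
        ],
        trust,
    )
    for post in feed:
        print(post["title"], "-", post["outlet"])
```

Under these assumptions, the high-engagement post from the low-trust outlet is pushed below the more trusted outlet's post, which is the basic demotion effect the crowdsourcing approach aims for.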

Our investigation builds on a similar policy proposal made by Facebook last year to have its community determine which sources are trustworthy. While that proposal received widespread scorn, we wanted to see if this type of strategy could actually work.

Read the full post at The Hill.

David Rand is Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team.

Gordon Pennycook is an Assistant Professor of Behavioral Science at University of Regina’s Hill/Levene Schools of Business.
