Crowdsourcing is the best weapon in the fight against fake news – David Rand and Gordon Pennycook

Associate Professor of Management Science and Brain and Cognitive Sciences, MIT Sloan School of Management

From The Hill

The problem of misinformation isn’t new, but it gained widespread attention during the 2016 presidential election when blatantly false stories (“fake news”) spread widely on social media.

Since then, a broad consensus has emerged that we must better understand why people believe and share misinformation and figure out how to stop it.

Limiting the spread of fake news, hyperpartisan content, conspiracy theories and other kinds of misinformation is important for our democracy. It seems likely to narrow the gap between liberals and conservatives on basic facts and to defuse some of the cross-party animosity that is so prevalent today. Less misinformation may also make it harder for individuals to win elections based on blatantly false claims.

While there has been a lot of scholarly work documenting the spread of misinformation, there has been much less study of possible solutions. And most of the solutions that social media companies have been deploying so far haven’t been very effective; they also have been almost exclusively focused on fake news rather than other kinds of problematic content.

Read More »

The baby and the bathwater: free speech and online extremism – Tauhid Zaman

MIT Sloan Assistant Professor Tauhid Zaman

From The Hill

As a Muslim American, I was shocked by the Christchurch mosque shootings in New Zealand that took the lives of 50 completely innocent people and injured scores of others in March. The tragedy was made more sickening by the fact that the alleged gunman, reportedly a white supremacist, live-streamed the first attack on Facebook Live.

The fact that the attack happened during the Muslim Friday prayer, a religious service I attend regularly myself, left me deeply shaken and heartbroken. Besides privately grieving for those who perished in Christchurch, I also attended public rallies to show solidarity with the victims in the aftermath of the carnage in New Zealand.

But there’s something you won’t see me doing: calling for a crackdown on what some deem offensive speech on social media — and in particular on what some consider extremist right-wing speech — as many have urged in the wake of the tragic Christchurch mosque shootings.

I know this may surprise some people. As I said, as a Muslim, I was particularly outraged by the slaughter in New Zealand. But I’m not outraged to the extent that I believe that:

  • censorship of speech, whether by governments or near-monopolistic social media corporations, is an appropriate response to extremism; and
  • censorship of speech is an effective way to combat extremism.

To be clear: I applaud the swift action of Facebook, Twitter, YouTube and other social-media sites to take down videos of the Christchurch shootings. Indeed, Facebook reported it removed or blocked 1.5 million videos in the first 24 hours after the attacks.

Read More »

Even a few bots can shift public opinion in big ways – Tauhid Zaman

MIT Sloan Associate Professor Tauhid Zaman

From The Conversation

Nearly two-thirds of the social media bots with political activity on Twitter before the 2016 U.S. presidential election supported Donald Trump. But all those Trump bots were far less effective at shifting people’s opinions than the smaller proportion of bots backing Hillary Clinton. As my recent research shows, a small number of highly active bots can significantly change people’s political opinions. The main factor was not how many bots there were, but how many tweets each set of bots issued.

My work focuses on military and national security aspects of social networks, so naturally I was intrigued by concerns that bots might affect the outcome of the upcoming 2018 midterm elections. I began investigating what exactly bots did in 2016. There was plenty of rhetoric, but only one basic factual principle: If information warfare efforts using bots had succeeded, then voters’ opinions would have shifted.

I wanted to measure how much bots were – or weren’t – responsible for changes in humans’ political views. I had to find a way to identify social media bots and evaluate their activity. Then I needed to measure the opinions of social media users. Lastly, I had to find a way to estimate what those people’s opinions would have been if the bots had never existed.
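The steps above — identify the bots, weight their activity, and compare against a bot-free counterfactual — can be sketched in code. This is an illustrative toy model, not the study's actual method: it assumes a simple weighted-average opinion dynamic in which each user drifts toward the tweet-weighted mean stance of the accounts they follow, and all names and rates below are hypothetical.

```python
# Toy sketch (not the paper's model): estimate how much a set of bot
# accounts shifts average opinion in a follower network. Each human's
# opinion drifts toward the tweet-rate-weighted mean opinion of the
# accounts they follow; bots never update. Stances lie in [-1, 1].

def average_opinion(follows, stance, tweet_rate, steps=50, weight=0.1):
    """follows: {user: [accounts followed]}; stance: {account: opinion};
    tweet_rate: {account: tweets/day}. Returns mean opinion of humans
    (the keys of `follows`) after `steps` rounds of influence."""
    opinion = dict(stance)
    for _ in range(steps):
        updated = {}
        for user, sources in follows.items():
            active = [s for s in sources if s in opinion]
            if not active:
                updated[user] = opinion[user]
                continue
            total = sum(tweet_rate.get(s, 1) for s in active)
            pull = sum(tweet_rate.get(s, 1) * opinion[s] for s in active) / total
            updated[user] = (1 - weight) * opinion[user] + weight * pull
        opinion.update(updated)
    return sum(opinion[u] for u in follows) / len(follows)

def bot_shift(follows, stance, tweet_rate, bots):
    """Counterfactual comparison: actual average opinion minus the
    average opinion with bot accounts removed from every feed."""
    no_bots = {u: [s for s in srcs if s not in bots]
               for u, srcs in follows.items()}
    return (average_opinion(follows, stance, tweet_rate)
            - average_opinion(no_bots, stance, tweet_rate))
```

Note how tweet rate, not bot count, drives the result: a single bot tweeting 100 times a day pulls its followers far harder than several bots tweeting occasionally, matching the article's central point.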

Read More »

Detecting customer-to-customer trends (without social media data) to optimize promotions – Georgia Perakis

MIT Sloan Prof. Georgia Perakis

From Huffington Post

Every year, there are a few items of clothing that become hot. For example, last fall, a Zara coat seemed to become a “must have” item. The coat even had its own Instagram page with more than 8,000 followers. Many factors contribute to this phenomenon, such as celebrities — and people with large social media followings — wearing the “hot” item.

When we have detailed social media data, it is relatively easy to identify patterns of influence to predict these trends. But what happens when we don’t have social media data? After all, social media platforms charge tremendous fees for access to that information. Can we use traditional data to detect underlying trends between groups of consumers and improve demand estimation? If so, can we use that information to optimize personalized promotions to increase profits, and also to present “the right individual with the right item at the right price?”

In a recent study, I looked at these questions with MIT Operations Research Center PhD students Lennart Baardman and Tamar Cohen and collaborators from Oracle Retail. We found that the answer to both questions is: yes. We began our study by building a customer demand model and algorithm that incorporates customer-to-customer trends or influences. We then applied the information about customer demand to make promotion decisions. With this method, profits increased by 5 to 12 percent. The model can be used by any retailer of any size for any product.
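The core idea — let one customer group's past purchases feed into another group's demand, then choose promotions against that demand model — can be illustrated with a minimal sketch. This is not the paper's actual model; the linear demand form, the single-discount search, and every parameter value are hypothetical simplifications.

```python
# Hypothetical sketch of trend-aware promotion planning. Demand for a
# customer group depends on price AND on what linked groups bought last
# period (the customer-to-customer trend term). The retailer then tests
# which single discount, if any, maximizes total profit this period.

def demand(group, price, last_period, base, beta, gamma, links):
    """Linear demand: baseline - price effect + trend spillover from
    linked groups' last-period purchases (floored at zero)."""
    trend = sum(last_period[other] for other in links.get(group, []))
    return max(0.0, base[group] - beta * price + gamma * trend)

def best_promotion(groups, full_price, promo_price, cost, last_period,
                   base, beta, gamma, links):
    """Evaluate 'no promotion' plus discounting each group in turn;
    return the (target_group_or_None, profit) pair with highest profit."""
    best = (None, float("-inf"))
    for target in [None] + groups:
        profit = 0.0
        for g in groups:
            p = promo_price if g == target else full_price
            profit += (p - cost) * demand(g, p, last_period,
                                          base, beta, gamma, links)
        if profit > best[1]:
            best = (target, profit)
    return best
```

With a spillover link from group A to group B, discounting A can beat discounting B even when the groups look identical in isolation, because A's extra purchases lift B's demand next period — the kind of pattern the trend term is meant to capture.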

Read More »

How Lies Spread Online – Sinan Aral

From The New York Times

The spread of misinformation on social media is an alarming phenomenon that scientists have yet to fully understand. While the data show that false claims are increasing online, most studies have analyzed only small samples or the spread of individual fake stories.

My colleagues Soroush Vosoughi, Deb Roy and I set out to change that. We recently analyzed the diffusion of all of the major true and false stories that spread on Twitter from its inception in 2006 to 2017. Our data included approximately 126,000 Twitter “cascades” (unbroken chains of retweets with a common, singular origin) involving stories spread by three million people more than four and a half million times.
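To make the cascade notion concrete: if each retweet records which tweet it came from, a cascade is simply the tree rooted at an original tweet, and its reach can be summarized by size (tweets in the tree) and depth (longest retweet chain). The sketch below is illustrative only — field names are hypothetical, and real Twitter data requires far more care (deleted parents, merged cascades of the same story, and so on).

```python
# Illustrative sketch: given retweet records as a mapping from tweet_id
# to parent_id (None for an original tweet), group tweets into cascades
# and compute each cascade's size and maximum depth, where the original
# tweet sits at depth 0.

def cascade_stats(parent):
    """parent: {tweet_id: parent_id or None}.
    Returns {root_id: (size, max_depth)}."""
    cache = {}  # tweet_id -> (root_id, depth)

    def resolve(tid):
        if tid in cache:
            return cache[tid]
        p = parent[tid]
        if p is None:
            cache[tid] = (tid, 0)          # original tweet: its own root
        else:
            root, d = resolve(p)           # walk up toward the origin
            cache[tid] = (root, d + 1)
        return cache[tid]

    stats = {}
    for tid in parent:
        root, d = resolve(tid)
        size, depth = stats.get(root, (0, 0))
        stats[root] = (size + 1, max(depth, d))
    return stats
```

For example, a root tweet with one direct retweet, one retweet of that retweet, and one more direct retweet forms a single cascade of size 4 and depth 2 — the kind of size and depth statistics the study aggregates across its roughly 126,000 cascades.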

Disturbingly, we found that false stories spread significantly more than did true ones. Our findings were published on Thursday in the journal Science.

We started by identifying thousands of true and false stories, using information from six independent fact-checking organizations, including Snopes, PolitiFact and Factcheck.org. These organizations exhibited considerable agreement — between 95 percent and 98 percent — on the truth or falsity of these stories.

Read More »