You can prevent a ‘Panama Papers’ scandal at your law firm — Lou Shipley

MIT Sloan Lecturer Lou Shipley

From Huffington Post

The data breach at the Panamanian law firm Mossack Fonseca recently sent shock waves around the world: the prime minister of Iceland stepped aside, Swiss authorities raided the headquarters of the Union of European Football Associations, and relatives of the president of China were linked to offshore companies. The size of the breach was also shocking, with 2.6 terabytes of data leaked, roughly 30 times more than the WikiLeaks release or the Edward Snowden materials. But the most shocking part of the "Panama Papers" story is that the breach, an exploit of the popular open source project Drupal, was totally preventable.

Everyone knows that law firms manage large amounts of highly sensitive information. Whether the data involves an individual’s estate plan, a startup’s patent application, or a high-profile merger and acquisition, clients expect their information to be secure. Indeed, lawyers are required to keep this information both confidential and secure. Yet, despite the very high level of security owed this information, many firms lack an IT staff and outsource the creation and maintenance of their data management and security services. Once outsourced, there is an assumption that someone else will effectively manage the data and ensure its security.

This is many firms’ first mistake. Even if they aren’t managing their own IT, law firms still have an obligation to make sure that data is properly secured. This means asking frequent questions about security and ensuring that the vendor is implementing reasonable security measures.

Read More »

MIT Sloan Experts Twitter Chat: #MITBigDataLatAm – Lee Ullmann and Jorge Hernán Peláez

Lee Ullmann, Director of the MIT Sloan Latin America Office
Office of International Programs

What is the importance of big data for Latin America, and what is its future in the region?

Join Lee Ullmann (@MITSloanLatAm), director of the MIT Sloan Latin America Office, and Jorge Hernán Peláez (@jhpelaez), a Colombian reporter for La W, for a conversation about how big data can make a significant contribution to both companies and governments.

The Twitter chat will take place on May 11 from 3:00 to 4:00 p.m. ET.

How can you participate? It's simple! If you have a question, answer, or comment, just include #MITBigDataLatAm in your Tweets.

The Twitter conversation is a lead-up to the conference "Big Data: Shaping the Future of Latin America," organized by the MIT Sloan School of Management on May 26 in Bogotá, Colombia. The conference will bring together internationally renowned faculty to discuss how big data can be used to make better-informed decisions.

In the run-up to the conference, the Twitter chat will explore the role of big data in Latin America's future and other themes from the event.

Learning how to make a real difference with big data in Latin America – Lee Ullmann

Lee Ullmann, Director of the MIT Sloan Latin America Office
Office of International Programs

Big data is a popular buzzword these days. Companies are told that harnessing the vast amounts of data produced globally will lead to greater profitability and productivity, with benefits like better products and more customization options. That's all well and good, but it's contingent on managers understanding how to use and analyze the data. How many, across all industries, can really do that?

A 2015 McKinsey Quarterly report found that very few legacy companies have achieved "big impact" through big data. When study participants were asked what degree of revenue or cost improvement they had seen from using big data, the majority of respondents reported less than 1 percent.

A big problem with big data is that, although everyone talks about it, most people don’t really know what to do to ensure that investing in it is a win-win proposition. To shed light on this issue, MIT Sloan is bringing its deep expertise to a May 26 conference in Bogotá, Colombia called, “Big Data: Shaping the Future of Latin America.” The presenters include faculty from across the MIT campus as well as the Department of National Planning in Colombia. With examples from their own research, they will share new and innovative ways to use big data to achieve specific goals.

Read More »

Run field experiments to make sense of your big data — Duncan Simester

MIT Sloan Prof. Duncan Simester

From Harvard Business Review

Making marketing decisions based on an analysis of Big Data can be risky if not done properly, because data seldom reveal the causal links between correlated events. Take the case of one large retailer we studied. The company noticed that customers who purchased perishables also tended to purchase large-screen TVs. Based on this observation, the company made a significant investment in marketing activities directed at increasing purchases of perishables, in the hope that this would trigger more TV purchases. But while they sold more perishables, they didn’t manage to shift any more TVs, and the profits from selling extra perishables weren’t enough to cover the marketing investment.
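The retailer's mistake can be illustrated with a small simulation (the numbers and the "household trait" below are hypothetical, not from the study): a hidden trait drives both perishable and TV purchases, so the two correlate in observational data, yet a randomized promotion of perishables produces no lift in TV sales.

```python
import random
import statistics

random.seed(42)

# Toy model: a latent household trait drives BOTH perishable and TV
# purchases, creating a correlation with no causal link between the two.
N = 10_000
traits = [random.random() for _ in range(N)]  # latent trait in [0, 1)

def buys_perishables(trait: float, promoted: bool) -> bool:
    # The promotion genuinely lifts perishable purchases.
    p = 0.2 + 0.6 * trait + (0.15 if promoted else 0.0)
    return random.random() < p

def buys_tv(trait: float) -> bool:
    # TV purchases depend only on the latent trait, never on perishables.
    return random.random() < 0.1 + 0.4 * trait

# Observational data: perishable buyers appear to buy TVs more often.
obs = [(buys_perishables(t, promoted=False), buys_tv(t)) for t in traits]
tv_rate_perishable = statistics.mean(tv for p, tv in obs if p)
tv_rate_other = statistics.mean(tv for p, tv in obs if not p)

# Field experiment: randomize the promotion, then compare the two groups.
treated, control = traits[: N // 2], traits[N // 2 :]
perish_lift = (statistics.mean(buys_perishables(t, True) for t in treated)
               - statistics.mean(buys_perishables(t, False) for t in control))
tv_lift = (statistics.mean(buys_tv(t) for t in treated)
           - statistics.mean(buys_tv(t) for t in control))

print(f"Observational TV-rate gap: {tv_rate_perishable - tv_rate_other:+.3f}")
print(f"Experimental perishables lift: {perish_lift:+.3f}")
print(f"Experimental TV lift: {tv_lift:+.3f}")
```

The observational gap is large while the experimental TV lift is near zero: the randomized comparison, not the correlation, reveals whether the marketing investment can pay off.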

Read More »

How big data can be used to improve early detection of cognitive disease — Cynthia Rudin

MIT Sloan Asst. Prof. Cynthia Rudin

From The Health Care Blog

The aging of populations worldwide is creating many healthcare challenges, including a growing number of dementia patients. One recent estimate suggests that 13.9% of people over age 70 currently suffer from some form of dementia, such as Alzheimer's disease or dementia associated with Parkinson's disease. The Alzheimer's Association predicts that by 2050, 135 million people worldwide will suffer from Alzheimer's disease.

While these are daunting numbers, some forms of cognitive diseases can be slowed if caught early enough. The key is early detection. In a recent study, my colleague and I found that machine learning can offer significantly better tools for early detection than what is traditionally used by physicians.

One of the more common traditional methods for screening and diagnosing cognitive decline is the Clock Drawing Test. Used for over 50 years, this well-accepted tool asks subjects to draw a clock on a blank sheet of paper showing a specified time, and then to copy a pre-drawn clock showing that time. The paper-and-pencil test is quick and easy to administer, noninvasive, and inexpensive. However, the results depend on the subjective judgment of the clinicians who score the tests. For instance, doctors must determine whether the clock circle has "only minor distortion" and whether the hour hand is "clearly shorter" than the minute hand.

In our study, we created an improved version of this test using big data and machine learning. For the past seven years, a group of neuropsychologists have had patients draw the clocks with a digital pen instead of a pencil, accumulating more than 3,400 tests in that time. The pen functions as an ordinary ballpoint, but it also records its position on the page with considerable spatial and temporal accuracy. We applied machine learning algorithms to this body of data to construct a data-driven diagnostic tool. So rather than having doctors subjectively analyze the pencil-drawn clocks, the data from the digital pen drawings goes into the machine learning model, which provides the result of the test.
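As a rough illustration of what the digitized pen data makes possible, here is a minimal sketch of timing-feature extraction from pen samples. The `PenSample` fields, feature names, and toy drawing are hypothetical, not the study's actual representation; in practice, features like these would feed the machine learning model rather than a clinician's eye.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PenSample:
    x: float        # position on the page (hypothetical units)
    y: float
    t: float        # timestamp in seconds
    pen_down: bool  # whether the pen is touching the paper

def extract_features(samples: List[PenSample]) -> Dict[str, float]:
    """Illustrative timing features; the study's real feature set is richer."""
    total_time = samples[-1].t - samples[0].t
    # "Thinking time": time spent with the pen lifted between strokes.
    up_time = sum(b.t - a.t for a, b in zip(samples, samples[1:])
                  if not a.pen_down)
    # A stroke starts whenever the pen comes back down (plus the first one).
    strokes = sum(1 for a, b in zip(samples, samples[1:])
                  if b.pen_down and not a.pen_down)
    strokes += 1 if samples[0].pen_down else 0
    return {
        "total_time": total_time,
        "think_fraction": up_time / total_time if total_time else 0.0,
        "stroke_count": float(strokes),
    }

# Toy drawing: one stroke, a one-second pause, then a second stroke.
demo = [
    PenSample(0, 0, 0.0, True),
    PenSample(10, 0, 1.0, True),
    PenSample(10, 0, 2.0, False),  # pen lifted
    PenSample(20, 5, 3.0, True),   # second stroke begins
    PenSample(30, 5, 4.0, True),
]
features = extract_features(demo)
print(features)
```

Unlike the hand-scored pencil version, quantities such as the fraction of time spent with the pen lifted are measured objectively and identically for every patient.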

Read the full post at The Health Care Blog.

Cynthia Rudin is an Associate Professor of Statistics in the Operations Research and Statistics group at the MIT Sloan School of Management.