How companies can create a cybersafe culture at work – Stuart Madnick

Stuart Madnick, MIT Sloan Professor of Information Technology

From The Wall Street Journal

As technical defenses against cyberattacks have improved, attackers have adapted by zeroing in on the weakest link: people. And too many companies are making it easy for the attackers to succeed.

An analogy that I often use is this: You can get a stronger lock for your door, but if you are still leaving the key under your mat, are you really any more secure?

It isn’t as if people aren’t aware of the weapons hackers are using. For instance, most people have heard of, and probably experienced, phishing—emails or messages asking you to take some action. (“We are your IT dept. and want to help you protect your computer. Click on this link for more information.”) Although crude, these tactics still achieve a 1% to 3% success rate.

Then there are the deadlier, personalized “spear-phishing” attacks. One example is an email, apparently sent from a CEO to the CFO, that starts by mentioning things they discussed at dinner last week and requests that money be transferred immediately for a new high-priority project. These attacks are increasingly popular because they have a high success rate.

The common element of all these kinds of attacks: They rely on people falling for them. Read More »

Retailers need to get real about security – Lou Shipley

MIT Sloan Lecturer Lou Shipley

From Xconomy

It seems a distant memory now. In December 2013 – light years ago in technology time – the retail giant Target disclosed a massive software security breach of its point-of-sale systems. The bad guys fled the virtual premises with the credit card information of 40 million customers. This astounding number would later rise to 70 million customers.

Target’s embarrassment, its loss of market share, its brand erosion, and its legal costs to settle claims collectively should have served as a nerve-jangling wake-up call for retailers large and small nationwide.

One would hope that retailers learned from Target’s data breach, but in fact the opposite has happened. In 2016, retail software security breaches were up 40 percent over the prior year, and in 2017 the following familiar brand names suffered breaches: Sonic, Whole Foods Market, Arby’s, Saks Fifth Avenue, Chipotle, Brooks Brothers, Kmart, and Verizon. Retail software security is getting worse, not better, and the dismal trend seems likely to continue in the near term. Why? Read More »

Is “murder by machine learning” the new “death by PowerPoint”? – Michael Schrage

MIT Center for Digital Business Research Fellow Michael Schrage

From Harvard Business Review 

Software doesn’t always end up being the productivity panacea that it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of the presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here — this is sadly a global reality that we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers plenty of unhappy litanies of unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.

The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent, networked road to counterproductivity hell is wired with good intentions. Read More »

Security with AI – Tauhid Zaman

Tauhid Zaman, Associate Professor, Operations Management

From Computerworld Colombia 

Wouldn’t it be great if a computer could identify a criminal before a crime is committed? That is the goal of machine learning, which is becoming a popular tool in crime prevention.

By analyzing data such as age, gender, and prior convictions, computers can predict whether someone is likely to commit a crime. If you are a judge deciding whether to grant bail or send someone to jail, that information can be quite useful. The problem is that machine learning can also be extremely dangerous because, if it is trusted completely, it can keep an innocent person behind bars.

In a recent study, we examined whether machine learning could be applied to identify terrorists on social media. Using data from several hundred thousand extremist accounts on Twitter, we developed a behavioral model of users that could predict whether new accounts were also connected to ISIS. While the model could catch many extremists, we also saw how machine learning is susceptible to two common errors: first, the algorithm can produce false positives, wrongly identifying someone as a terrorist. Second, it can produce false negatives, failing to identify actual terrorists.
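
As a minimal illustration of those two error types (a toy sketch using scikit-learn and made-up labels, not the study’s actual model or data):

```python
# Toy example: how false positives and false negatives show up
# in a binary classifier's confusion matrix. Labels are invented
# for illustration; this is not the Twitter study's data or model.
from sklearn.metrics import confusion_matrix

# 1 = extremist-linked account, 0 = ordinary account
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # hypothetical ground truth
y_pred = [1, 0, 1, 0, 1, 0, 0, 0, 1, 0]   # hypothetical model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False positives (ordinary accounts wrongly flagged): {fp}")
print(f"False negatives (extremist accounts missed): {fn}")
```

In this toy run the model flags one ordinary account (a false positive) and misses one extremist account (a false negative); when the cost of a mistake is someone’s freedom, both error rates matter, not just overall accuracy.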

Read More »

Tech innovators open the digital economy to job seekers, the financially underserved – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

The future of work is a prime interest of the MIT Initiative on the Digital Economy, started in 2013 by researchers Erik Brynjolfsson and Andy McAfee. To help answer questions about the impact of automation on jobs and the effects of digital innovation, the group launched the MIT Inclusive Innovation Challenge last year, inviting organizations around the world to compete in improving the economic opportunities of middle- and base-level workers.

More than $1 million in prizes went to winners of the 2017 competition in Boston last month, in four categories: job creation and income growth, skills development and matching, technology access, and financial inclusion. Awards were funded with support from Google.org, The Joyce Foundation, software firm ISN, and ISN President and CEO Joseph Eastin.

Read More »