Is “murder by machine learning” the new “death by PowerPoint”? – Michael Schrage

MIT Center for Digital Business Research Fellow Michael Schrage

From Harvard Business Review 

Software doesn’t always end up being the productivity panacea that it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of the presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here — this is sadly a global reality that we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers plenty of unhappy litanies of unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.

The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent networks to counter-productivity hell are wired with good intentions.

Read More »

Security and AI – Tauhid Zaman

Tauhid Zaman, Associate Professor, Operations Management

From Computerworld Colombia 

Wouldn’t it be great if a computer could identify a criminal before a crime is ever committed? That is the goal of machine learning, which is becoming a popular tool in crime prevention.

By analyzing data such as age, gender, and prior convictions, computers can predict whether someone is likely to commit a crime. If you are a judge deciding whether to grant bail or send someone to jail, that information can be quite useful. The problem is that machine learning can also be extremely dangerous because, if it is trusted completely, it can keep an innocent person behind bars.

In a recent study, we examined whether machine learning could be applied to identify terrorists on social media. Using data from several hundred thousand extremist accounts on Twitter, we developed a behavioral model of users that could predict whether new accounts were also connected to ISIS. While the model could catch many extremists, we also saw how machine learning is susceptible to two common errors: first, the algorithm can produce false positives, misidentifying someone as a terrorist; second, it can produce false negatives, failing to identify actual terrorists.

Read More »
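The two error types Zaman describes are the off-diagonal cells of a classifier’s confusion matrix. A minimal sketch in Python makes the distinction concrete (the labels and counts below are invented for illustration; they are not from the study):

```python
def confusion_counts(y_true, y_pred):
    """Count the four outcomes of a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # flagged, but innocent
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # a true case the model missed
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# 1 = extremist account, 0 = ordinary account (toy data)
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
print(f"false positives: {fp}, false negatives: {fn}")  # prints "false positives: 1, false negatives: 1"
```

The danger the article points to is that both cells carry real human costs: a false positive flags an innocent person, while a false negative lets an actual threat through, and tuning a model to shrink one error typically inflates the other.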

Tech innovators open the digital economy to job seekers, financially underserved – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

The future of work is a prime interest of the MIT Initiative on the Digital Economy, started in 2013 by researchers Erik Brynjolfsson and Andy McAfee. To help answer questions about the impact of automation on jobs and the effects of digital innovation, the group launched the MIT Inclusive Innovation Challenge last year, inviting organizations around the world to compete on improving economic opportunities for middle- and base-level workers.

More than $1 million in prizes went to winners of the 2017 competition in Boston last month in four categories: job creation and income growth, skills development and matching, technology access, and financial inclusion. Awards were funded with support from Google.org, The Joyce Foundation, software firm ISN, and ISN President and CEO Joseph Eastin.

Read More »

The doors that blockchain technology opens – Christian Catalini and Cathy Barrera

MIT Sloan Professor Christian Catalini

From La Nacion

If awards were given for the trendiest business buzzword, blockchain would surely be a contender. After all, it is one of the most hyped technologies in Silicon Valley and beyond.

And yet, despite all the buzz, the promise and potential of blockchain, the technology underlying cryptocurrencies such as bitcoin, are not well understood. To date, only a few studies have been conducted on the subject, and according to a survey carried out last year by Deloitte, nearly 40% of senior executives say they have little or no knowledge of how blockchain works.

At a basic level, blockchain technology allows a network of computers to reach consensus, at regular intervals, on the true state of a decentralized ledger. That ledger can contain various types of shared data, such as transaction records, transaction attributes, credentials, or other information.

Read More »
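The ledger idea described in the excerpt can be sketched as a toy hash chain in Python. This is a deliberate simplification, it omits the distributed consensus among many nodes that the excerpt mentions, and all names and amounts are invented, but it shows why a hash-linked record makes tampering detectable:

```python
import hashlib
import json

def block_hash(data, prev):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a new block linked to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev, "hash": block_hash(data, prev)})
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
    return True

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))                    # prints "True": the ledger is internally consistent
chain[0]["data"]["amount"] = 500        # retroactively falsify a transaction
print(verify(chain))                    # prints "False": the hash links expose the tampering
```

In a real blockchain, many independent computers each hold a copy of such a chain and periodically agree on which tip is the true one, which is the consensus step the excerpt describes.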

Imagine If Robo Advisers Could Do Emotions – Andrew Lo

MIT Sloan Professor Andrew Lo

From The Wall Street Journal

At a conference last year, I was approached by an audience member after my talk. He thanked me for my observation that it’s unrealistic to expect investors to do nothing in the face of a sharp market-wide selloff, and that pulling out of the market can sometimes be the right thing to do. In fact, this savvy attendee converted all of his equity holdings to cash by the end of October 2008.

He then asked me for some advice: “Is it safe to get back in now?” Seven years after he moved his money into cash, he’s still waiting for just the right time to reinvest; meanwhile, the S&P 500 earned an annualized return of 14% during this period.

Investing is an emotional process. Managing these emotions is probably the greatest open challenge of financial technology. Investing is much more complicated than other chores like driving, which is why driverless cars are already more successful than even the best robo advisers.

Despite the enthusiasm of tech-savvy millennials—the generation of investors now in their 20s and 30s who are just as happy interacting with an app as with warm-blooded humans—robo advisers don’t take into account the limits of human cognition; they don’t make allowances for emotional reactions like fear and greed; and they can’t eliminate blind spots. Robo advisers don’t do emotion. When the stock market roils, investors freak out. They need comfort and encouragement. During last August’s stock-market rout, Vanguard Group told The Wall Street Journal it was “besieged” with calls from jittery investors and had to pull volunteers from across the company to handle the call volume.

But what if a robo adviser could identify the precise moment you freak out and encourage you not to sell by giving you historical context that calms your nerves? Better yet, what if this digital adviser could actively manage the risk of your portfolio so you don’t freak out at all?

Imagine if, like your car’s cruise control, you can set a level of risk that you’re comfortable with and your robo adviser will apply the brakes when you’re going downhill and step on the gas when you’re going uphill so as to maintain that level of risk. And if you do decide to temporarily take over by stepping on the brakes, the robo adviser will remind you from time to time that you need to step on the gas if you want to reach your destination in the time you’ve allotted. Instead of artificial intelligence, we should first conquer artificial emotion—by constructing algorithms that accurately capture human behavior, we can build countermeasures to protect us from ourselves.
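Lo’s cruise-control analogy corresponds roughly to volatility targeting: scale exposure down when realized risk rises and up when it falls. A minimal sketch, with invented parameters and toy return series, not a description of any actual robo adviser:

```python
import statistics

def target_risk_weight(recent_returns, target_vol, max_leverage=1.5):
    """Scale equity exposure so realized volatility tracks a target level.

    'Brakes' when markets get rough (realized vol above target -> weight < 1),
    'gas' when they are calm (vol below target -> weight > 1, capped).
    """
    realized_vol = statistics.stdev(recent_returns)
    if realized_vol == 0:
        return max_leverage
    return min(target_vol / realized_vol, max_leverage)

calm = [0.001, -0.002, 0.0015, 0.0005, -0.001]   # quiet daily returns
rough = [0.03, -0.04, 0.025, -0.035, 0.02]       # turbulent daily returns

print(target_risk_weight(calm, target_vol=0.01))   # steps on the gas, capped at 1.5
print(target_risk_weight(rough, target_vol=0.01))  # applies the brakes, well below 1
```

The point of the analogy is that dampening the portfolio’s risk automatically may prevent the investor from ever reaching the emotional threshold at which they pull out of the market entirely.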

Robo advisers have great potential but the technology is still immature; they’re the rotary phones to today’s iPhone.

Marvin Minsky, the recently deceased founding father of artificial intelligence, summarized the ultimate goal of his field by saying that he didn’t just want to build a computer that he could be proud of, he wanted to build a computer that could be proud of him. Wouldn’t it be grand if we built a robo adviser that could be proud of our portfolio?

See the post at WSJ “The Experts”

Andrew W. Lo is the Charles E. and Susan T. Harris Professor at MIT Sloan School of Management, director of the MIT Laboratory for Financial Engineering, principal investigator at MIT Computer Science and Artificial Intelligence Laboratory, and chief investment strategist at AlphaSimplex Group.