Is “murder by machine learning” the new “death by PowerPoint”? – Michael Schrage

MIT Center for Digital Business Research Fellow Michael Schrage

From Harvard Business Review 

Software doesn’t always end up being the productivity panacea that it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of the presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here — this is sadly a global reality that we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers a long litany of unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.

The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent networks to counter-productivity hell are wired with good intentions.

For example, as Gideon Mann and Cathy O’Neil astutely observe in their HBR article “Hiring Algorithms Are Not Neutral”: “Man-made algorithms are fallible and may inadvertently reinforce discrimination in hiring practices. Any HR manager using such a system needs to be aware of its limitations and have a plan for dealing with them…. Algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations.”
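To see how opinions end up “embedded in code,” consider a minimal sketch in Python. The data, group labels, and frequency model below are all hypothetical illustrations, not anything from Mann and O’Neil’s piece: a toy predictor trained on past hiring decisions that favored one group will faithfully reproduce that preference, even for equally qualified candidates.

    # A minimal, hypothetical sketch of bias propagating from labels to model.
    from collections import defaultdict

    # Toy historical records: (school_tier, group, hired). Past managers
    # favored group "A", so the hired label correlates with group membership
    # even when qualifications (school_tier) are identical.
    history = [
        ("top", "A", 1), ("top", "A", 1), ("top", "B", 1), ("top", "B", 0),
        ("mid", "A", 1), ("mid", "A", 0), ("mid", "B", 0), ("mid", "B", 0),
    ]

    # "Training": estimate P(hired | school_tier, group) by frequency
    # counting -- the same statistical signal a real classifier latches onto.
    counts = defaultdict(lambda: [0, 0])  # key -> [times hired, total seen]
    for tier, group, hired in history:
        counts[(tier, group)][0] += hired
        counts[(tier, group)][1] += 1

    def predicted_hire_rate(tier, group):
        hired, total = counts[(tier, group)]
        return hired / total if total else 0.0

    # Two equally qualified candidates, different groups: the scores diverge
    # because the historical labels, not the qualifications, diverge.
    for group in ("A", "B"):
        print(group, predicted_hire_rate("top", group))
    # Prints: A 1.0, B 0.5 -- the old bias is now "embedded in code."

Nothing in the code mentions discrimination; the skew lives entirely in the historical labels, which is why auditing the training data matters as much as auditing the algorithm.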

These intrinsic biases — in data sets and algorithms alike — can be found wherever important data-driven decisions need to be made, such as customer segmentation efforts, product feature designs, and project risk assessments. There may even be biases in detecting biases. In other words, there’s no escaping the reality that machine learning’s computational strengths inherently coexist with human beings’ cognitive weaknesses, and vice versa. But that’s more a leadership challenge than a technical issue. The harder question is: Who’s going to “own” this digital coevolution of talent and technology, and sustainably steer it to success?

To answer this question, consider the two modes of AI/ML that are most likely to dominate enterprise initiatives:

  • Active AI/ML means people directly determine the role of artificial intelligence or machine learning to get the job done. The humans are in charge; they tell the machines what to do. People rule.
  • Passive AI/ML, by contrast, means the algorithms largely determine people’s parameters and processes for getting the job done. The software is in charge; the machines tell the humans what to do. Machines rule.

Read the full post at Harvard Business Review.

Michael Schrage is a research fellow at the MIT Sloan School’s Center for Digital Business.
