Is “murder by machine learning” the new “death by PowerPoint?” – Michael Schrage

MIT Center for Digital Business Research Fellow Michael Schrage

From Harvard Business Review 

Software doesn’t always end up being the productivity panacea that it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of the presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here — this is sadly a global reality that we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers a litany of unhappy, unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.

The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent networks to counterproductivity hell are wired with good intentions. Read More »

The business of artificial intelligence – Erik Brynjolfsson and Andrew McAfee

Professor of Information Technology and Director of the MIT Initiative on the Digital Economy, Erik Brynjolfsson

Co-Director of the MIT Initiative on the Digital Economy, Andrew McAfee

From Harvard Business Review

For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.
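The idea that systems can learn a task on their own, rather than follow rules we articulate, can be made concrete with a minimal sketch (ours, not the authors’): a perceptron that learns the logical AND function purely from labelled examples. No one writes down the rule; the weights are nudged toward it.

```python
# A minimal sketch of "learning without explicit rules": a perceptron
# that learns the logical AND function from labelled examples alone.
# The data, learning rate, and epoch count are illustrative choices.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, learned from data
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly nudge the weights toward the correct answers; the rule
# "output 1 only when both inputs are 1" is never stated anywhere.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

The same loop, fed different examples, would learn a different function; that is the sense in which the knowledge lives in the data rather than in anyone’s explanation.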

Read More »

How analytics and machine learning can aid organ transplant decisions – Dimitris Bertsimas and Nikolaos Trichakis

MIT Sloan Prof. Dimitris Bertsimas

MIT Sloan Asst. Prof. Nikolaos (Nikos) Trichakis

From Health Data Management

Imagine this scenario: A patient named John has waited 5.5 years for a much-needed kidney transplant. One day, he learns that a deceased donor kidney is available and that he is the 153rd patient to whom this kidney was offered.

Clearly, this is not a “high-quality” organ if it was declined by 152 patients or the clinicians treating them. But given how long John has been waiting for a new kidney, should he accept the offer anyway, or decline it? And can analytics and machine learning help make that decision easier?

Currently, that decision is usually made by John’s doctor based on a variety of factors, such as John’s current overall health status on dialysis and a gut instinct about whether (and when) John will get a better offer for a healthier kidney.

If John is young and relatively healthy, the risk of prematurely accepting a lower-quality kidney is future organ failure and more surgeries. If John’s health status is critical and he rejects the kidney, he could be underestimating how long it will take until a higher-quality organ is available. The decision could be a matter of life or death.
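The tradeoff the article describes can be sketched as a toy expected-value calculation. Everything below is our illustrative assumption, not a clinical estimate or the authors’ model: outcomes are scored in hypothetical quality-adjusted life years (QALYs), and the probabilities are made up to show how the same offer can warrant opposite decisions.

```python
# A toy decision-analysis sketch of John's dilemma. Every number below
# is a made-up illustration, not a clinical estimate.

def expected_value_accept(qaly_offered_kidney):
    """Expected outcome if John accepts the offered kidney today."""
    return qaly_offered_kidney

def expected_value_wait(p_better_offer, qaly_better_kidney, qaly_on_dialysis):
    """Expected outcome if John declines and waits: a better kidney
    arrives with probability p_better_offer; otherwise he remains on
    dialysis."""
    return (p_better_offer * qaly_better_kidney
            + (1 - p_better_offer) * qaly_on_dialysis)

accept_now = expected_value_accept(8)

# A young, relatively healthy patient tolerates dialysis well and is
# likely to see a better offer in time, so waiting looks attractive.
healthy_wait = expected_value_wait(0.6, 12, 6)

# A critically ill patient may not survive the wait for a better organ.
critical_wait = expected_value_wait(0.2, 12, 2)

print("healthy patient should wait:", healthy_wait > accept_now)    # True
print("critical patient should wait:", critical_wait > accept_now)  # False
```

The hard part in practice is estimating the probability and timing of a better offer, which is exactly where machine learning trained on historical waitlist data could sharpen what is today a clinician’s gut instinct.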

John’s dilemma isn’t unique in the world of kidney transplantation, where current demand outpaces supply. Since 2002, the number of candidates on the waiting list has nearly doubled, from slightly more than 50,000 to more than 96,000 in 2013. During the same time, live donation rates have decreased. Complicating this problem of supply and demand is an unacceptably high deceased donor organ discard rate, as much as 50 percent in some instances.

Read More »

Artificial Intelligence Will Soon Shop For You, But Is That A Good Thing? – Renée Richardson Gosline

MIT Sloan Prof. Renée Richardson Gosline

From WBUR’s Cognoscenti

We’ve all had bad department store shopping experiences. The aggressively cheerful salesperson. The unforgiving glare of the dressing room. The overstuffed racks of garments where none of the sizes fit, and the ones that do, don’t come in your favorite color.

The advent of online shopping has helped consumers gain more control over their shopping experiences. But digital purchases are often a gamble, too. You scroll through endless webpages to find the perfect boots only to discover your size is on back order for two months. And the items you purchase frequently disappoint: The jacket that looked so elegant on the website’s model looks awkward on your frame.

Retail prognosticators claim that artificial intelligence and other new technologies will offer shoppers salvation. In the not-so-distant future, armies of robots using retina recognition software (à la “Minority Report”) will tailor their sales pitches to your preferences and price point. Voice-activated assistants and digital mannequins will help you to find just the right fit. Shopping from home will be a breeze too: Virtual reality headsets will allow you to “try on” clothes and sample items ranging from a tube of lipstick to a tennis racket. Two-day shipping? How antiquated. In the future, your package will arrive via drones in less than two hours. It may sound like science fiction but, in fact, many stores are testing these innovations and have plans to roll them out to customers.

Read More »

Good managers, not machines, drive productivity growth – John Van Reenen

MIT Sloan Professor John Van Reenen

From Bloomberg View

When people discuss what drives long-run productivity, they usually focus on technical change. But productivity is about more than robots, new drugs and self-driving vehicles. First, if you break down the sources of productivity across nations and firms there is a large residual left over (rather inelegantly named “Total Factor Productivity” or TFP for short). And observable measures of technology can only account for a small fraction of this dark matter.
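The residual the author describes can be illustrated with standard growth accounting (the Solow residual). The Cobb-Douglas form and the growth rates below are textbook illustrative assumptions, not figures from the article.

```python
# A sketch of how economists back out Total Factor Productivity as a
# residual, using growth accounting under a Cobb-Douglas production
# function Y = A * K**alpha * L**(1 - alpha).
# The growth rates below are made-up illustrative numbers.

alpha = 0.33            # capital's share of income (a common benchmark)
output_growth = 0.030   # 3.0% output growth
capital_growth = 0.040  # 4.0% capital growth
labor_growth = 0.010    # 1.0% labor growth

# TFP growth is whatever output growth remains after accounting for
# measured inputs -- the "dark matter" the article refers to.
tfp_growth = (output_growth
              - alpha * capital_growth
              - (1 - alpha) * labor_growth)

print(f"TFP growth: {tfp_growth:.4f}")
```

In this example roughly a third of output growth is left unexplained by capital and labor, which is the sense in which observable technology accounts for only a fraction of the total.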

On top of this, a huge number of statistical analyses and case studies of the impact of new technologies on firm performance have shown massive variation in their impact. Much more important than the amount spent on fancy tech are the managerial practices of the firms that implement the changes.

Although there is a tradition in economics, starting with the 19th-century American economist Francis Walker, on the importance of management for productivity, it has been largely subterranean. Management is very hard to measure in a robust way, so economists have been content to delegate the task to the case-study literature in business schools.

Managers are more frequently the butt of jokes, in TV shows ranging from “The Office” to “Horrible Bosses,” than seen as drivers of growth. But maybe things are now changing.

Read More »