AI and the productivity paradox – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

Artificial intelligence is now being applied to tasks that not long ago were viewed as the exclusive domain of humans, matching or surpassing human-level performance. At the same time, productivity growth has declined significantly over the past decade, and income has continued to stagnate for the majority of Americans. This puzzling contradiction is addressed in “Artificial Intelligence and the Modern Productivity Paradox,” a working paper recently published by the National Bureau of Economic Research.

As the paper’s authors, MIT professor Erik Brynjolfsson, MIT PhD candidate Daniel Rock and University of Chicago professor Chad Syverson, note: “Aggregate labor productivity growth in the U.S. averaged only 1.3% per year from 2005 to 2016, less than half of the 2.8% annual growth rate sustained from 1995 to 2004… What’s more, real median income has stagnated since the late 1990s and non-economic measures of well-being, like life expectancy, have fallen for some groups.”
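To get a sense of how large the gap between those two rates becomes when compounded, here is a rough back-of-the-envelope sketch in Python. It is an illustration of the quoted figures, not a calculation from the paper, and the twelve-year window is an assumption about how “2005 to 2016” is counted.

```python
# Rough illustration: compound the two quoted average growth rates
# over the 2005-2016 window and compare the cumulative effect.
years = 12                 # assumed: 2005 through 2016 inclusive
slow = 1.013 ** years      # ~1.3% average annual labor-productivity growth
fast = 1.028 ** years      # ~2.8% annual growth sustained in 1995-2004

print(f"Cumulative growth at 1.3%/yr over {years} years: {slow - 1:.0%}")
print(f"Cumulative growth at 2.8%/yr over {years} years: {fast - 1:.0%}")
print(f"Shortfall relative to the earlier trend: {fast / slow - 1:.0%}")
```

By this rough arithmetic, output per hour ends up roughly 19% lower than it would have been had the 1995-2004 trend continued, which is the gap the paper sets out to explain.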

After considering four potential explanations, the NBER paper concluded that there’s actually no productivity paradox. Viewed in the proper context, there is no inherent inconsistency between transformative technological advances and lagging productivity. Over the past two centuries we’ve learned that there’s generally a significant time lag between the broad acceptance of new technology-based paradigms and the ensuing economic transformation and institutional recomposition. Even after reaching a tipping point of market acceptance, it takes considerable time, often decades, for new technologies and business models to be widely embraced by companies and industries across the economy; only then do their benefits, including productivity growth, follow. The paper argues that we’re precisely in such an in-between period.

Let me briefly describe the four potential explanations explored in the paper: false hopes, mismeasurements, concentrated distribution, and implementation and restructuring lags.

Read More »

The business of artificial intelligence – Erik Brynjolfsson and Andrew McAfee

Director of the MIT Initiative on the Digital Economy, Erik Brynjolfsson

Co-Director of the MIT Initiative on the Digital Economy, Andrew McAfee

From Harvard Business Review

For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.
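To make the “learning from examples instead of rules” point concrete, here is a minimal sketch, my own illustration rather than anything from the article, using scikit-learn’s bundled handwritten-digit dataset: nobody writes down explicit rules for what each digit looks like; the model infers them from labeled examples.

```python
# Minimal sketch: a model learns a recognition task from labeled examples,
# rather than from rules a human wrote down explicitly.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No hand-coded description of "what a 7 looks like" anywhere:
# the classifier fits its own decision rules to the training examples.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print("Accuracy on unseen examples:", model.score(X_test, y_test))
```

The same pattern, learning a mapping from examples rather than articulating the rules ourselves, is what lets modern systems approach tasks like face recognition that we perform but cannot fully explain.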

Read More »

Robots add real value when working with humans, not replacing them — Matt Beane

MIT Sloan Ph.D. Student Matt Beane

From TechCrunch

In the popular media, we talk a lot about robots stealing jobs. But when we stop speculating and actually look at the real world of work, the impact of advanced robotics is far more nuanced and complicated. Issues of jobs and income inequality fade away, for example — there aren’t remotely enough robots to affect more than a handful of us in the practical sense.

Yet robots usually spell massive changes in the way that skilled work gets done: The work required to fly an F-16 in a combat zone is radically different from the work required to fly a Reaper, a semi-autonomous unmanned aerial vehicle, in that same zone.

Because they change the work so radically, robot-linked upheavals like this create a challenge: How do you train the next generation of professionals who will be working with robots?

My research into the increasing use of robotics in surgery offers a partial answer. But it has also uncovered trends that — if they continue — could have a major impact on surgical training and, as a result, the quality of future surgeries.

Read More »

The jobs that AI can’t replace — Erik Brynjolfsson and Andrew McAfee

MIT Sloan’s Erik Brynjolfsson and Andrew McAfee

From BBC

Current advances in robots and other digital technologies are stirring up anxiety among workers and in the media. There is a great deal of fear, for example, that robots will not only destroy existing jobs, but also be better at most or all of the tasks required in the future.

Our research at the Massachusetts Institute of Technology (MIT) has shown that that’s at best a half-truth. While it is true that robots are getting very good at a whole bunch of jobs and tasks, there are still many categories in which humans perform better.

And, perhaps more importantly, robots and other forms of automation can aid in the creation of new and better jobs for humans. As a result, while we do expect that some jobs will disappear, other jobs will be created and some existing jobs will become more valuable.

For example, machines currently dominate jobs in routine information processing. “Computer,” after all, used to be an actual job title, held by a person who sat and added long rows of numbers. Now it is, well, an actual computer.

On the other hand, jobs such as data scientist didn’t use to exist, but because computers have made enormous data sets analyzable, we now have new jobs for people to interpret these huge pools of information. In the tumult of our economy, even as old tasks get automated away, along with demand for their corresponding skills, the economy continues to create new jobs and industries.

Read the full post at the BBC.

The authors also appeared on the BBC’s “Panorama” for a segment titled “Could a Robot Do My Job?” See the program here.

Erik Brynjolfsson is the Schussel Family Professor of Management Science, a Professor of Information Technology, and the Director of the MIT Initiative on the Digital Economy at the MIT Sloan School of Management. 

Andrew McAfee is a Principal Research Scientist at the MIT Center for Digital Business.