Robots won’t steal our jobs if we put workers at center of AI revolution – Thomas Kochan

MIT Sloan Professor Thomas Kochan

From The Conversation

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Read More »

Seeing past the hype around cognitive computing – Jeanne Ross

Jeanne Ross, Director & Principal Research Scientist at the MIT Sloan School’s CISR

From Information Management

Given the hype around artificial intelligence, you might be worried that you’re missing the boat if you haven’t yet invested in cognitive computing applications in your business. Don’t panic! Consumer products, vehicles, and equipment with embedded intelligence are generating lots of excitement. However, business applications of AI are still in the early stages.

Research at MIT Sloan’s Center for Information Systems Research (CISR) suggests that small experiments in cognitive computing may help you tap the significant opportunities AI offers. But it’s easy to invest huge amounts of cash and time in failed experiments, so you will want to target your investments carefully.

The biggest impact from cognitive computing applications is expected to come from automation of many existing jobs. We expect computers to do—faster and cheaper—many tasks now performed by humans. Progress thus far, however, suggests that we have significant obstacles to overcome in our efforts to replace human intelligence with computer intelligence. Despite some notable exceptions, we expect the displacement of human labor to proceed incrementally.

The business challenge is to determine which applications your company is ready to cash in on while resisting the lure of tackling processes that you can’t cost-effectively teach machines to do well. We have studied the opportunities and risks of business applications of cognitive computing and identified several lessons. These lessons offer suggestions for positioning your firm to capitalize on the potential benefits of cognitive computing and avoid the pitfalls.

Read More »

AI and the productivity paradox – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

Artificial intelligence is now applied to tasks that not long ago were viewed as the exclusive domain of humans, matching or surpassing human-level performance. But, at the same time, productivity growth has significantly declined over the past decade, and income has continued to stagnate for the majority of Americans. This puzzling contradiction is addressed in “Artificial Intelligence and the Modern Productivity Paradox,” a working paper recently published by the National Bureau of Economic Research.

As the paper’s authors, MIT professor Erik Brynjolfsson, MIT PhD candidate Daniel Rock and University of Chicago professor Chad Syverson, note: “Aggregate labor productivity growth in the U.S. averaged only 1.3% per year from 2005 to 2016, less than half of the 2.8% annual growth rate sustained from 1995 to 2004… What’s more, real median income has stagnated since the late 1990s and non-economic measures of well-being, like life expectancy, have fallen for some groups.”

After considering four potential explanations, the NBER paper concluded that there’s actually no productivity paradox. Given the proper context, there are no inherent inconsistencies between having both transformative technological advances and lagging productivity. Over the past two centuries we’ve learned that there’s generally a significant time lag between the broad acceptance of new technology-based paradigms and the ensuing economic transformation and institutional recomposition. Even after reaching a tipping point of market acceptance, it takes considerable time, often decades, for the new technologies and business models to be widely embraced by companies and industries across the economy, and only then will their benefits follow, including productivity growth. The paper argues that we’re precisely in such an in-between period.

Let me briefly describe the four potential explanations explored in the paper: false hopes, mismeasurements, concentrated distribution, and implementation and restructuring lags.

Read More »

Artificial intelligence and the future of work – Thomas Kochan

MIT Sloan Professor Thomas Kochan

From InfoTechnology

Artificial intelligence is quickly coming of age, and lingering questions remain about how we will manage this change.

AI will eliminate some jobs, there’s no question, but it will also create some new ones. So the first question we will face as business people, workers and citizens is about balance: are we going to create more jobs than we eliminate or not?

The second and much more fundamental question is: how are we going to proactively manage our AI investments so we can use AI to create new jobs or career opportunities for the future? And how will we make sure those jobs reach all sectors of our society, increasing our overall wealth and well-being rather than deepening the inequities that already exist?

I believe that if we think about it strategically and engage more people in the design of AI systems, we’ll be able to make this transition successfully. It will require a proactive strategy. The American public and people all over the world have seen the negative consequences of not being proactive—take global trade, for example. The benefits of global trade have not been widely shared, and we are now witnessing the anger and frustration this has produced in the turn toward more extreme politics and the deeper social divisions laid bare by recent events. We can’t make the same mistake with the future development of technology.

Read More »

The business of artificial intelligence – Erik Brynjolfsson and Andrew McAfee

Director of the MIT Initiative on the Digital Economy, Erik Brynjolfsson

Co-Director of the MIT Initiative on the Digital Economy, Andrew McAfee

From Harvard Business Review

For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.
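
To make this concrete, the short sketch below (not from the article) shows a system learning a decision rule from labeled examples rather than being handed the rule explicitly. The two-cluster task, the data, and the simple perceptron learner are illustrative assumptions, not anything the authors describe.

```python
# Illustrative sketch only: a tiny perceptron that learns to separate two
# clusters of 2-D points from labeled examples. No one writes the decision
# rule by hand; it emerges from the data. Data and parameters are made up.

import random

random.seed(0)

# Labeled examples: points near (0, 0) are class -1, points near (3, 3) are class +1.
examples = (
    [((random.gauss(0, 0.5), random.gauss(0, 0.5)), -1) for _ in range(50)]
    + [((random.gauss(3, 0.5), random.gauss(3, 0.5)), +1) for _ in range(50)]
)

# Start from a model that knows nothing: zero weights and bias.
w = [0.0, 0.0]
b = 0.0

# Training: whenever the model misclassifies an example, nudge its
# parameters toward the correct answer and keep going.
for _ in range(20):  # passes over the training data
    for (x1, x2), label in examples:
        prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
        if prediction != label:
            w[0] += label * x1
            w[1] += label * x2
            b += label

# The learned parameters now classify points the model has never seen.
for point in [(0.0, 0.0), (3.0, 3.0)]:
    score = w[0] * point[0] + w[1] * point[1] + b
    print(point, "->", "+1" if score > 0 else "-1")
```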

Read More »