How do we learn to work with intelligent machines? – Matt Beane

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we’re handling AI in a way that blocks that path — and sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. What can be done? Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI’s amazing capabilities while enhancing our skills at the same time.


Matt Beane is a Research Affiliate with the MIT Initiative on the Digital Economy.

How AI-human superminds will save jobs – Thomas Malone

Thomas W. Malone is the Patrick J. McGovern (1959) Professor of Management and a Professor of Information Technology at the MIT Sloan School of Management.

From Management Today 

We often overestimate the potential for AI because it’s easy to imagine computers as smart as people. Science fiction is full of them. But it’s much harder to create such machines than to imagine them.

All of today’s most advanced AI programs are only capable of specialised intelligence — doing particular tasks like recognising faces, playing Jeopardy, or driving cars. But any normal human five-year-old has far more general intelligence — the ability to learn and do many different tasks — than even the most advanced computers today. Experts on average predict that human-level artificial general intelligence is about 20 years in the future, but that’s what they’ve been predicting for the last 60 years.

On the other hand, we often underestimate the potential for using computers to provide hyperconnectivity — connecting people to other people (and machines) at massive scales and in rich new ways. In fact, it’s probably easier to create massively connected groups of people and computers (like the Internet and social networks) than to imagine what these ‘superminds’ will actually do.

Superminds – such as hierarchies, markets and communities – are composed of people and computers doing things together that neither can do alone. For example, superminds use machines to do complex calculations but people to decide which programmes to run in the first place and what to do when things go wrong.

Read More »

Robots won’t steal our jobs if we put workers at center of AI revolution – Thomas Kochan

MIT Sloan Professor Thomas Kochan

From The Conversation

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Read More »

Seeing past the hype around cognitive computing – Jeanne Ross

Jeanne Ross, Director & Principal Research Scientist at the MIT Sloan School’s CISR

From Information Management

Given the hype around artificial intelligence, you might be worried that you’re missing the boat if you haven’t yet invested in cognitive computing applications in your business. Don’t panic! Consumer products, vehicles, and equipment with embedded intelligence are generating lots of excitement. However, business applications of AI are still in the early stages.

Research at MIT Sloan’s Center for Information Systems Research (CISR) suggests that small experiments in cognitive computing may help you tap the significant opportunities AI offers. But it’s easy to invest huge amounts of cash and time in failed experiments, so you will want to target your investments carefully.

The biggest impact from cognitive computing applications is expected to come from automation of many existing jobs. We expect computers to do—faster and cheaper—many tasks now performed by humans. Progress thus far, however, suggests that we have significant obstacles to overcome in our efforts to replace human intelligence with computer intelligence. Despite some notable exceptions, we expect the displacement of human labor to proceed incrementally.

The business challenge is to determine which applications your company is ready to cash in on while resisting the lure of tackling processes that you can’t cost-effectively teach machines to do well. We have studied the opportunities and risks of business applications of cognitive computing and identified several lessons. These lessons offer suggestions for positioning your firm to capitalize on the potential benefits of cognitive computing and avoid the pitfalls.

Read More »

AI and the productivity paradox – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

Artificial intelligence is now being applied to tasks that not long ago were viewed as the exclusive domain of humans, matching or surpassing human-level performance. But, at the same time, productivity growth has significantly declined over the past decade, and income has continued to stagnate for the majority of Americans. This puzzling contradiction is addressed in “Artificial Intelligence and the Modern Productivity Paradox,” a working paper recently published by the National Bureau of Economic Research.

As the paper’s authors, MIT professor Erik Brynjolfsson, MIT PhD candidate Daniel Rock and University of Chicago professor Chad Syverson, note: “Aggregate labor productivity growth in the U.S. averaged only 1.3% per year from 2005 to 2016, less than half of the 2.8% annual growth rate sustained from 1995 to 2004… What’s more, real median income has stagnated since the late 1990s and non-economic measures of well-being, like life expectancy, have fallen for some groups.”

After considering four potential explanations, the NBER paper concluded that there’s actually no productivity paradox. Given the proper context, there are no inherent inconsistencies between having both transformative technological advances and lagging productivity. Over the past two centuries we’ve learned that there’s generally a significant time lag between the broad acceptance of new technology-based paradigms and the ensuing economic transformation and institutional recomposition. Even after reaching a tipping point of market acceptance, it takes considerable time, often decades, for the new technologies and business models to be widely embraced by companies and industries across the economy, and only then will their benefits follow, including productivity growth. The paper argues that we’re precisely in such an in-between period.

Let me briefly describe the four potential explanations explored in the paper: false hopes, mismeasurements, concentrated distribution, and implementation and restructuring lags.

Read More »