MIT Sloan Visiting Lecturer Irving Wladawsky-Berger
Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discussing the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us. Irving has a PhD in Physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor of the Imperial College of London, and a fellow of the Center for Global Enterprise.
Here is the podcast transcript:
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor of the Imperial College of London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.
Irving Wladawsky-Berger: Byron it’s a pleasure to be here with you.
So, that’s a lot of things you do. What do you spend most of your time doing?
Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.
MIT Sloan Professor Thomas Kochan
From The Conversation
The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.
While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.
Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”
The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.
MIT Sloan Visiting Lecturer Irving Wladawsky-Berger
From The Wall Street Journal
Artificial intelligence is now applied to tasks that not long ago were viewed as the exclusive domain of humans, matching or surpassing human-level performance. But, at the same time, productivity growth has significantly declined over the past decade, and income has continued to stagnate for the majority of Americans. This puzzling contradiction is addressed in “Artificial Intelligence and the Modern Productivity Paradox,” a working paper recently published by the National Bureau of Economic Research.
As the paper’s authors, MIT professor Erik Brynjolfsson, MIT PhD candidate Daniel Rock and University of Chicago professor Chad Syverson, note: “Aggregate labor productivity growth in the U.S. averaged only 1.3% per year from 2005 to 2016, less than half of the 2.8% annual growth rate sustained from 1995 to 2004… What’s more, real median income has stagnated since the late 1990s and non-economic measures of well-being, like life expectancy, have fallen for some groups.”
After considering four potential explanations, the NBER paper concluded that there’s actually no productivity paradox. Given the proper context, there are no inherent inconsistencies between having both transformative technological advances and lagging productivity. Over the past two centuries we’ve learned that there’s generally a significant time lag between the broad acceptance of new technology-based paradigms and the ensuing economic transformation and institutional recomposition. Even after reaching a tipping point of market acceptance, it takes considerable time, often decades, for the new technologies and business models to be widely embraced by companies and industries across the economy, and only then will their benefits follow, including productivity growth. The paper argues that we’re precisely in such an in-between period.
Let me briefly describe the four potential explanations explored in the paper: false hopes, mismeasurement, concentrated distribution, and implementation and restructuring lags.
Director of the MIT Initiative on the Digital Economy, Erik Brynjolfsson
Co-Director of the MIT Initiative on the Digital Economy, Andrew McAfee
From Harvard Business Review
For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.
The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.
Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.
MIT Sloan Ph.D. Student Matt Beane
In the popular media, we talk a lot about robots stealing jobs. But when we stop speculating and actually look at the real world of work, the impact of advanced robotics is far more nuanced and complicated. Issues of jobs and income inequality fade away, for example — there aren’t remotely enough robots to affect more than a handful of us in the practical sense.
Yet robots usually spell massive changes in the way that skilled work gets done: The work required to fly an F-16 in a combat zone is radically different from the work required to fly a Reaper, a semi-autonomous unmanned aerial vehicle, in that same zone.
Because they change the work so radically, robot-linked upheavals like this create a challenge: How do you train the next generation of professionals who will be working with robots?
My research into the increasing use of robotics in surgery offers a partial answer. But it has also uncovered trends that — if they continue — could have a major impact on surgical training and, as a result, the quality of future surgeries.