Robots might not take your job — but they will probably make it boring – Matt Beane

Matt Beane, Research Affiliate in Management Science

From Wired

Whether they believe robots are going to create or destroy jobs, most experts say that robots are particularly useful for handling “dirty, dangerous and dull” work. They point to jobs like shutting down a leaky nuclear reactor, cleaning sewers, or inspecting electronic components to really drive the point home. Robots don’t get offended, they are cheap to repair when they get “hurt,” and they don’t get bored. It’s hard to disagree: What could possibly be wrong with automating jobs that are disgusting, mangle people, or make them act like robots?

The problem is that installing robots often makes the jobs around them worse. Use a robot for aerial reconnaissance, and remote pilots end up bored. Use a robot for surgery, and surgical trainees end up watching, not learning. Use a robot to transport materials, and workers who handle those materials can no longer interact with and learn from their customers. Use a robot to farm, and farmers end up barred from repairing their own tractors.

I know this firsthand: For most of the last seven years, I have been studying these dynamics in the field. I spent over two years looking at robotic surgery in top-tier hospitals around the US, and at every single one of them, most nurses and surgical assistants were bored out of their skulls.

Read More »

Robots won’t steal our jobs if we put workers at center of AI revolution – Thomas Kochan

MIT Sloan Professor Thomas Kochan

From The Conversation

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Read More »

In an era of tech innovation, whispers of declining research productivity – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

Given the pace of technological change, we tend to think of our age as the most innovative ever. But over the past several years, a number of economists have argued that increasing R&D efforts are yielding decreasing returns.

“Are Ideas Getting Harder to Find?”, a recent paper by economists Nicholas Bloom, Charles Jones, and Michael Webb of Stanford and John Van Reenen of MIT, shows that, across a wide range of industries, research effort is rising substantially while research productivity is declining sharply.

Moore’s Law, the empirical observation that the number of transistors on a computer chip doubles approximately every two years, illustrates these trends. The paper points out that the number of researchers required to double chip density today is 18 times larger than the number required in the early 1970s. In the case of Moore’s Law, research productivity has been declining at a rate of about 6.8% per year.

The authors conducted similar in-depth analyses of the agricultural and pharmaceutical industries. For agricultural yields, research effort rose by a factor of 2 between 1970 and 2007, while research productivity declined by a factor of 4 over the same period, an annual rate of 3.7%. For pharmaceuticals, research effort rose by a factor of 9 between 1970 and 2014, while research productivity declined by a factor of 5, an annual rate of 3.5%.
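As a quick sanity check on those numbers, the implied annual rates can be recovered from the cited decline factors, assuming a constant compound decline from year to year (my simplification for illustration; the paper’s own estimation method is more involved):

```python
# Sanity-check the annual decline rates implied by the cited factors.
# Assumption (mine, not the paper's): a constant compound decline,
# so productivity after n years = initial * (1 - r)**n.

def annual_decline_rate(factor: float, years: int) -> float:
    """Annual rate r such that (1 - r)**years == 1 / factor."""
    return 1.0 - factor ** (-1.0 / years)

# Agriculture: productivity fell by a factor of 4 between 1970 and 2007.
ag = annual_decline_rate(4, 2007 - 1970)
# Pharmaceuticals: productivity fell by a factor of 5 between 1970 and 2014.
pharma = annual_decline_rate(5, 2014 - 1970)

print(f"agriculture:    {ag:.1%} per year")      # ≈ 3.7%, matching the cited rate
print(f"pharmaceutical: {pharma:.1%} per year")  # ≈ 3.6%, close to the cited 3.5%
```

The small gap on the pharmaceutical figure is consistent with the paper using a slightly different averaging convention than simple compounding.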

Read More »

Seeing past the hype around cognitive computing – Jeanne Ross

Jeanne Ross, Director & Principal Research Scientist at the MIT Sloan School’s CISR

From Information Management

Given the hype around artificial intelligence, you might be worried that you’re missing the boat if you haven’t yet invested in cognitive computing applications in your business. Don’t panic! Consumer products, vehicles, and equipment with embedded intelligence are generating lots of excitement. However, business applications of AI are still in the early stages.

Research at MIT Sloan’s Center for Information Systems Research (CISR) suggests that small experiments in cognitive computing may help you tap the significant opportunities AI offers. But it’s easy to sink huge amounts of cash and time into failed experiments, so you will want to target your investments carefully.

The biggest impact from cognitive computing applications is expected to come from automation of many existing jobs. We expect computers to do—faster and cheaper—many tasks now performed by humans. Progress thus far, however, suggests that we have significant obstacles to overcome in our efforts to replace human intelligence with computer intelligence. Despite some notable exceptions, we expect the displacement of human labor to proceed incrementally.

The business challenge is to determine which applications your company is ready to cash in on while resisting the lure of tackling processes that you can’t cost-effectively teach machines to do well. We have studied the opportunities and risks of business applications of cognitive computing and identified several lessons. These lessons offer suggestions for positioning your firm to capitalize on the potential benefits of cognitive computing and avoid the pitfalls.

Read More »

How companies can create a cybersafe culture at work – Stuart Madnick

Stuart Madnick, MIT Sloan Professor of Information Technology

From The Wall Street Journal

As technical defenses against cyberattacks have improved, attackers have adapted by zeroing in on the weakest link: people. And too many companies are making it easy for the attackers to succeed.

An analogy that I often use is this: You can get a stronger lock for your door, but if you are still leaving the key under your mat, are you really any more secure?

It isn’t as if people aren’t aware of the weapons hackers are using. For instance, most people have heard of, and probably experienced, phishing—emails or messages asking you to take some action. (“We are your IT dept. and want to help you protect your computer. Click on this link for more information.”) Although crude, these tactics still achieve a 1% to 3% success rate.

Then there are the more deadly, personalized “spearphish” attacks. One example is an email, apparently sent from a CEO to the CFO, that starts by mentioning things they discussed at dinner last week and requests that money be transferred immediately for a new high-priority project. These attacks are increasingly popular because they have a high success rate.

The common element of all these kinds of attacks: They rely on people falling for them.

Read More »