What can a restaurant teach us about innovation? – Pilar Opazo

Pilar Opazo, Lecturer, Work and Organization Studies

From Modern Restaurant Management

When Chef Ferran Adrià shuttered his famed elBulli restaurant in 2011, foodie circles were stunned. elBulli was at the peak of its fame: it had three Michelin stars and a waiting list of two million diners. Adrià, widely considered one of the most imaginative culinary minds in the world, operated in an elite class of chefs. He kept his restaurant open just six months a year and served one meal a day, never offering the same dish twice.

Rumors circulated that the closure was due to a family feud or money problems. But the truth was that Adrià was petrified of repeating himself. (“Can you imagine this pressure?” he told The New York Times. “You cannot.”)

In 2014 Adrià reopened elBulli not as a restaurant but as a foundation dedicated to studying and understanding the nature of creativity. It’s a subject in which Adrià has passionate expertise. When he arrived at elBulli in the early 1980s, it was a French restaurant. By the 1990s Adrià was head chef and elBulli had been transformed into a test kitchen for gastronomic invention.

But while he became known for dreaming up dishes like smoke foam and for reinventing classics like Escoffier’s peach melba, Adrià was engaged in a far more ambitious project: achieving and sustaining a culture of innovation. He and his team established a set of best practices for organizational creativity and systematic invention. The result: processes and structures that are applicable not just to restaurants but to other organizations as well. Here are some elements of elBulli’s, ahem, secret sauce:

Read More »

American Workers’ Labor Day Message: Restore our Voice at Work! – Thomas A. Kochan, Erin L. Kelly, William Kimball, and Duanyi Yang

MIT Sloan Professor Thomas Kochan

MIT Sloan Distinguished Professor of Work and Organization Studies Erin Kelly

From The Conversation

When courageous teachers in West Virginia, Kentucky, Oklahoma, Colorado, and Arizona marched on their state capitols earlier this year to get a pay raise and better funding for their students, they spoke for the majority of American workers who lack an effective voice at work. Their actions should serve as a wake-up call for employers and politicians alike: It is time to restore our voice at work.

Teachers are not alone in demanding a change. A recent national survey of the workforce we conducted found a persistent and deep gap between the influence and say American workers believe they ought to have at work, something we call worker voice, and what they actually experience. A majority of workers report they have less say than they believe they should have over key issues such as compensation and benefits, job security, promotions, training, new technology, employer values, respect, and protections against abuse and discrimination. And between a third and one half report a voice gap on decisions about how and when they work, safety, and the quality of their products or services.

The long-term decline in unions is a key reason for this voice gap, and many workers see reversing that decline as part of the solution. In the same survey, nearly 50 percent of the workforce (equivalent to 58 million workers) report they would join a union if given the chance to do so today, up from a third of the workforce in prior decades.

But rebuilding worker voice, in ways that work in today’s economy and for the full range of workers who want more of a voice, will require new strategies on the part of unions and other worker advocates, as well as an entirely new labor law.

Read More »

Is “murder by machine learning” the new “death by PowerPoint”? – Michael Schrage

MIT Center for Digital Business Research Fellow Michael Schrage

From Harvard Business Review 

Software doesn’t always end up being the productivity panacea it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here; this is, sadly, a reality we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers an unhappy litany of unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.
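To see how a “smart” system can entrench a data-based bias while sounding confident, consider a toy example. The sketch below is purely illustrative and not from the article: it uses synthetic data and a generic scikit-learn classifier to show that when historical training labels encode a group bias, the model faithfully, and confidently, reproduces that bias, even though the true underlying behavior is identical across groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True underlying behavior is identical across the two groups.
group = rng.integers(0, 2, n)        # group attribute: 0 or 1
behavior = rng.random(n) < 0.10      # 10% base rate in both groups

# "Historical" labels are biased: the same behavior is flagged far
# more often in group 1, so the bias is baked into the training data.
flag_prob = np.where(behavior, 0.90, 0.02) * np.where(group == 1, 1.0, 0.3)
labels = rng.random(n) < flag_prob

# The model never sees true behavior, only the group attribute and noise.
X = np.column_stack([group, rng.normal(size=n)])
model = LogisticRegression().fit(X, labels)

# The model assigns markedly higher risk to group 1 across the board,
# even though true behavior rates are equal by construction.
for g in (0, 1):
    risk = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted risk = {risk:.3f}")
```

Nothing in the model’s output flags the problem: its predictions look precise and track the (biased) labels well, which is exactly why they inspire misplaced confidence.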

The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent networks to counter-productivity hell are wired with good intentions.

Read More »

Six months isn’t long term – Robert Pozen

Robert Pozen, Senior Lecturer, MIT Sloan School of Management

From The Wall Street Journal

President Trump tweeted on Friday that he had directed the Securities and Exchange Commission to study a suggestion from a business leader, later revealed to be outgoing PepsiCo CEO Indra Nooyi: “Stop quarterly reporting & go to a six month system.” The popular theory is that quarterly reporting discourages firms from making long-term investments.

But switching to semiannual reporting wouldn’t help. Find us the CEOs with stockpiles of good, long-term projects that they are not pursuing, but that they would pursue if only they had three extra months to report earnings. Reporting every six months is nobody’s definition of “long term.” Besides, investors have waited patiently as Amazon, Netflix, and many biotech firms have followed long-term strategies.

In 2007, financial reporting in the United Kingdom moved from semiannual to quarterly. Yet capital expenditures and research-and-development spending didn’t fall significantly over the next three to six years, according to a study from the CFA Institute Research Foundation. When the quarterly requirement was ended in 2014, investment by U.K. companies didn’t change.

Read More »

Safety with AI – Tauhid Zaman

Tauhid Zaman, Associate Professor, Operations Management

From Computerworld Colombia 

Wouldn’t it be great if a computer could identify a criminal before a crime is committed? That is the goal of machine learning, which is becoming a popular tool in crime prevention.

By analyzing data such as age, gender, and prior convictions, computers can predict whether someone is likely to commit a crime. If you are a judge deciding whether to grant bail or send someone to jail, that information can be quite useful. The problem is that machine learning can also be extremely dangerous: if you trust it completely, it can keep an innocent person behind bars.

In a recent study, we examined whether machine learning could be applied to identify terrorists on social media. Using data from several hundred thousand extremist accounts on Twitter, we developed a behavioral model of users that could predict whether new accounts were also connected to ISIS. While the model could catch many extremists, we also saw how machine learning is susceptible to two common errors. First, the algorithm can produce false positives, wrongly identifying someone as a terrorist. Second, it can produce false negatives, failing to identify actual terrorists.
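To make those two error types concrete, here is a minimal, hypothetical sketch; the scores and counts are synthetic, not from the study. Whatever threshold a classifier uses to flag accounts, moving it trades one error against the other: a lower threshold misses fewer extremists but flags more innocent accounts, and vice versa.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classifier scores: most accounts are benign (label 0),
# a few are extremist (label 1); the score distributions overlap,
# so some error is unavoidable at any threshold.
labels = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])
scores = np.concatenate([rng.normal(0.3, 0.15, 950),
                         rng.normal(0.7, 0.15, 50)])

for threshold in (0.4, 0.5, 0.6):
    flagged = scores >= threshold
    false_pos = np.sum(flagged & (labels == 0))    # innocent accounts flagged
    false_neg = np.sum(~flagged & (labels == 1))   # real extremists missed
    print(f"threshold {threshold:.1f}: "
          f"{false_pos} false positives, {false_neg} false negatives")
```

The point of the exercise: neither error rate can be driven to zero by tuning alone, which is why relying on such a model completely, whether for bail decisions or counterterrorism, is so risky.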

Read More »