Getting serious about the human side of data – Tom Davenport, Randy Bean

Fellow, MIT Center for Digital Business, Tom Davenport

From Forbes

NewVantage Partners just released its 7th annual executive survey on big data and artificial intelligence in large organizations. If you’re pulling for better data, analytics, and AI within companies, there is much to encourage you in this year’s survey. Many aspects of this important domain of business show improvement:

  • There was a higher participation rate in the survey than ever before, suggesting that more executives believe the topic is important.
  • 90% of those who completed the survey are “C-level” executives—chief data, analytics, or information officers. A decade ago, only one of these jobs (the CIO) even existed.
  • 92% of the respondents are increasing their pace of investment in big data and AI.
  • 62% have already seen measurable results from their investments in big data and AI (a bit less than in 2018, but still pretty good).
  • 48% say their organization competes on data and analytics. When Tom introduced this concept in a 2006 HBR article, perhaps 5% of large organizations would have said they did so.

Read More »

The rise of global, superstar firms, sectors and cities – Irving Wladawsky-Berger

MIT Sloan Visiting Lecturer Irving Wladawsky-Berger

From The Wall Street Journal

In the 1990s, the internet was supposed to usher in a much more open, decentralized, democratic economy and society. Startups with innovative business models were now able to reach customers anywhere, anytime. Companies, from the largest to the smallest, could now transact with anyone around the world. Vertically integrated firms became virtual enterprises, increasingly relying on supply chain partners for many of the functions once done in-house. Experts argued that large firms were no longer necessary and would in fact be at a disadvantage in the emerging digital economy when competing against agile, innovative smaller companies.

Some even predicted that the internet would lead to the decline of cities. People could now work and shop from home; be in touch with their friends over e-mail, video calls, and text messaging; and get access to all kinds of information and entertainment online. Why would anyone choose to live in a crowded, expensive, crime-prone urban area when they could lead a more relaxing, affordable life in an outer suburb or small town?

But, as we well know, it hasn’t quite worked out as expected. Instead, we’ve seen the rise of the global superstar company, the unicorn startup, and the winner-take-all city. As it’s turned out, the internet’s universal reach and connectivity have led to increasingly powerful network effects and to the rise of platform economies.

Read More »

Why hypotheses beat goals – Jeanne Ross

Jeanne Ross, Director & Principal Research Scientist at the MIT Sloan School’s CISR

From MIT Sloan Management Review 

Not long ago, it became fashionable to embrace failure as a sign of a company’s willingness to take risks. This trend lost favor as executives recognized that what they wanted was learning, not necessarily failure. Every failure can be attributed to a raft of missteps, and many failures do not automatically contribute to future success.

Certainly, if companies want to aggressively pursue learning, they must accept that failures will happen. But the practice of simply setting goals and then being nonchalant when they are not met is inadequate.

Instead, companies should focus organizational energy on hypothesis generation and testing. Hypotheses force individuals to articulate in advance why they believe a given course of action will succeed. A failure then exposes an incorrect hypothesis — which can more reliably convert into organizational learning.

What Exactly Is a Hypothesis?

When my son was in second grade, his teacher regularly introduced topics by asking students to state some initial assumptions. For example, she introduced a unit on whales by asking: How big is a blue whale? The students all knew blue whales were big, but how big? Guesses ranged from the size of the classroom to the size of two elephants to the length of all the students in class lined up in a row. Students then set out to measure the classroom and the length of the row they formed, and they looked up the size of an elephant. They compared their results with the measurements of the whale and learned how close their estimates were.

Note that in this example, there is much more going on than just learning the size of a whale. Students were learning to recognize assumptions, make intelligent guesses based on those assumptions, determine how to test the accuracy of their guesses, and then assess the results.

This is the essence of hypothesis generation. A hypothesis emerges from a set of underlying assumptions. It is an articulation of how those assumptions are expected to play out in a given context. In short, a hypothesis is an intelligent, articulated guess that is the basis for taking action and assessing outcomes.

Hypothesis generation in companies becomes powerful if people are forced to articulate and justify their assumptions. It makes the path from hypothesis to expected outcomes clear enough that, should the anticipated outcomes fail to materialize, people will agree that the hypothesis was faulty.

Building a culture of effective hypothesizing can lead to more thoughtful actions and a better understanding of outcomes. Not only will failures be more likely to lead to future successes, but successes will foster future successes.

Why Is Hypothesis Generation Important?

Digital technologies are creating new business opportunities, but as I’ve noted in earlier columns, companies must experiment to learn both what is possible and what customers want. Most companies are relying on empowered, agile teams to conduct these experiments. That’s because teams can rapidly hypothesize, test, and learn.
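To make that concrete, here is a minimal sketch (not from Ross’s article) of how a team might test one such hypothesis, assuming a hypothetical A/B experiment in which a redesigned checkout flow is expected to raise conversion; the function, the scenario, and every number below are illustrative assumptions.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that the variant changes nothing.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothesis: the redesigned checkout flow (variant B) converts better than
# the current flow (control A). Counts below are invented for illustration.
z, p = two_proportion_z_test(successes_a=120, n_a=2400, successes_b=165, n_b=2500)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

If the p-value is small and the variant’s conversion is indeed higher, the hypothesis survives this test; if not, the team has learned that one of its stated assumptions was wrong, which is exactly the kind of learning Ross describes.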

Hypothesis generation contrasts starkly with more traditional management approaches designed for process optimization. Process optimization involves telling employees both what to do and how to do it. Process optimization is fine for stable business processes that have been standardized for consistency. (Standardized processes can usually be automated, specifically because they are stable.) Increasingly, however, companies need their people to steer efforts that involve uncertainty and change. That’s when organizational learning and hypothesis generation are particularly important.

Shifting to a culture that encourages empowered teams to hypothesize isn’t easy. Established hierarchies have developed managers accustomed to directing employees on how to accomplish their objectives. Those managers invariably rose to power by being the smartest person in the room. Such managers can struggle with the requirements for leading empowered teams. They may recognize the need to hold teams accountable for outcomes rather than specific tasks, but they may not be clear about how to guide team efforts.

Read the full post at MIT Sloan Management Review.

Jeanne W. Ross conducts academic research that targets the challenges of senior level executives at CISR’s nearly 100 global sponsor companies.

Communicators with Sinan Aral

Sinan Aral, David Austin Professor of Management, MIT Sloan School of Management

From C-SPAN

MIT Professor Sinan Aral talked about proposals to measure disinformation on social media in order to safeguard elections and democracy.

Watch the video at C-SPAN. 

Sinan Aral is the David Austin Professor of Management at MIT, where he is a Professor of IT & Marketing, and Professor in the Institute for Data, Systems and Society where he co-leads MIT’s Initiative on the Digital Economy.

Artificial intelligence in modern cybersecurity operations – George Wrenn

George Wrenn, Research Affiliate, Management Science

From Security Now 

AI has been around as a sub-field of computer science since the 1950s, but it has undergone many “fits and starts.” Different factions have, over time, balkanized both the definition of AI and the applications of the concept. This explains the “AI winters” that have at times derailed the field and shifted attention to other areas of computer science such as robotics and machine vision. Fortunately, the field has matured enough to move past these arguments and the delays they caused in practical adoption. However, ethical debates still rage, as AI has a historical backdrop rooted in “machines simulating or replacing human thought and actions.”

Here’s how Wikipedia defines AI:

Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” Colloquially, the term “artificial intelligence” is used to describe machines that mimic “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.”
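As a purely illustrative aside (not from the article or the Wikipedia entry), the quoted “intelligent agent” framing can be sketched as a perceive-and-act loop; the security-flavored environment, action names, and scores below are all hypothetical.

```python
import random

def perceive(environment):
    """Return an observation of the current environment state."""
    return environment["threat_level"]

def expected_success(observation, action):
    # Toy scoring: each action suits a different threat level, and the agent
    # rates actions by how closely they match the observed level.
    suited_level = {"monitor": 0.2, "alert": 0.5, "block": 0.9}
    return 1.0 - abs(observation - suited_level[action])

def choose_action(observation, actions):
    """Pick the action with the highest expected chance of achieving the goal."""
    return max(actions, key=lambda a: expected_success(observation, a))

# Hypothetical environment and action set, for illustration only.
environment = {"threat_level": random.random()}
actions = ["monitor", "alert", "block"]

observation = perceive(environment)
print(f"threat level {observation:.2f} -> chosen action: {choose_action(observation, actions)}")
```

In a real security operation, the hand-written scoring function would typically be replaced by a model learned from data, which is where the machine-learning side of AI enters modern cybersecurity operations.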

Read More »