George Wrenn, Research Affiliate, Management Science
From Security Now
AI has been around as a subfield of computer science since the 1950s, but it has undergone many fits and starts. Over time, different factions have balkanized both the definition of AI and the applications of the concept. This explains the “AI winters” that have at times derailed the field and diverted focus to adjacent fields such as robotics and machine vision. Fortunately, the field has matured enough to move past these arguments and delays into practical use. Ethical debates still rage, however, because AI has a historical backdrop rooted in “machines simulating or replacing human thought and actions.”
Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” Colloquially, the term “artificial intelligence” is used to describe machines that mimic “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.”
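The “intelligent agent” definition above — a device that perceives its environment and acts to maximize its chance of achieving its goals — can be sketched in a few lines of code. The thermostat below is a hypothetical illustration of the perceive-then-act loop; the class and method names are mine, not drawn from the article.

```python
class ThermostatAgent:
    """A minimal intelligent agent: perceives room temperature and
    acts to move it toward a target (its goal)."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp  # the agent's goal

    def perceive(self, environment: dict) -> float:
        # Sense the only feature of the environment relevant to the goal.
        return environment["temperature"]

    def act(self, environment: dict) -> str:
        # Choose the action most likely to advance the goal.
        temp = self.perceive(environment)
        if temp < self.target_temp:
            return "heat"
        if temp > self.target_temp:
            return "cool"
        return "idle"


agent = ThermostatAgent(target_temp=21.0)
print(agent.act({"temperature": 18.0}))  # heat
print(agent.act({"temperature": 25.0}))  # cool
```

Even this toy agent fits Kaplan and Haenlein’s framing loosely: it interprets external data (the sensed temperature) and uses it to pursue a specific goal, though a real learning system would also adapt its behavior from experience.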
Christos Makridis, digital fellow at the MIT Sloan Initiative on the Digital Economy
From The Hill
The recent spotlight on homelessness and poverty in Baltimore, Los Angeles and other major cities highlights a growing challenge in urban America: the rising cost of living.
The economy is booming by nearly all accounts. Year-to-year real GDP growth has been at least 2 percent since President Trump was elected, the unemployment rate is at its lowest point since 1969 and year-to-year nominal wages are growing faster than they have since the 2008-09 Great Recession. But a handful of metropolitan areas are experiencing growing rates of homelessness and labor market exits.
For example, California’s population growth in 2018 was the slowest in recorded history. And, while the overall number of homeless people is at its lowest point since 2007, according to the latest statistics from the U.S. Department of Housing and Urban Development (HUD), the number of unsheltered people has grown from 175,399 in 2014 to 194,467 in 2018.
Thomas W. Malone is the Patrick J. McGovern (1959) Professor of Management and a Professor of Information Technology at the MIT Sloan School of Management
From The Wall Street Journal
Ask people about artificial intelligence, and the discussion will most often turn to jobs: which ones will be eliminated and which ones will be created.
But regardless of what happens to the number of jobs, there’s another question that is less often discussed but crucial for maximizing both productivity and employee morale: How is AI likely to change the structure of business hierarchies themselves?
The obvious answer may be that the management structure is likely to get more centralized and rigid. After all, AI will help managers track more detailed data about everything their subordinates are doing, which should make it easier—and more inviting—to exercise stricter controls.
This will no doubt be true in some cases. But look more closely, and I believe the opposite is much more likely to happen in many cases. That’s because when AI does the routine tasks, much of the remaining nonroutine work is likely to be done in loose “adhocracies,” ever-shifting groups of people with the combinations of skills needed for whatever problems arise.
John Sterman, Professor of Management and Director of the MIT Sloan Sustainability Initiative at the MIT Sloan School of Management
From Global GoalsCast
Is the zeitgeist shifting toward action to curb global warming and achieve the Sustainable Development Goals? Veteran Financial Times journalist Gillian Tett joins Edie Lush and Claudia Romo Edelman to consider that question in the aftermath of the United Nations climate summit and General Assembly. While the actions of governments were disappointing, they see a new attitude among many businesses, which were far more engaged in UN activity this year. “The balance of risks in the eyes of many business executives have shifted,” says Tett. Many executives now think it is “riskier to stand on the sidelines and do nothing than to actually be involved in some of these social and climate change movements,” Tett reports. The challenge now is not whether to act but how. Edie completes her visit with Professor John Sterman at MIT, whose En-ROADS computer model of the climate lets Edie identify policy actions that will contain heating of the atmosphere. “The conclusion here is it is, technically, still possible to limit expected warming to 1.5” degrees Celsius, Sterman concludes.
John D. Sterman is the Jay W. Forrester Professor of Management at the MIT Sloan School of Management and a Professor in the MIT Institute for Data, Systems, and Society. He is also the Director of the MIT System Dynamics Group and the MIT Sloan Sustainability Initiative.
Maxwell T. Boykoff is an Associate Professor in the Cooperative Institute for Research in Environmental Sciences Center for Science and Technology Policy Research at the University of Colorado, Boulder.
Bradley Tusk is a venture capitalist and CEO and founder of Tusk Ventures.
Laura is a global expert on corporate sustainability, with two decades of experience in strategy consulting.
Gillian Tett is chair of the editorial board and US editor-at-large of the Financial Times.
David Rand, Associate Professor of Management Science and Brain and Cognitive Sciences, MIT Sloan School of Management
From The New York Times
Expressions of moral outrage are playing a prominent role in contemporary debates about issues like sexual assault, immigration and police brutality. In response, there have been criticisms of expressions of outrage as mere “virtue signaling” — feigned righteousness intended to make the speaker appear superior by condemning others.
Clearly, feigned righteousness exists. We can all think of cases where people simulated or exaggerated feelings of outrage because they had a strategic reason to do so. Politicians on the campaign trail, for example, are frequent offenders.
So it may seem reasonable to ask, whenever someone is expressing indignation, “Is she genuinely outraged or just virtue signaling?” But in many cases this question is misguided, for the answer is often “both.”
You may not realize it, but distinguishing between genuine and strategic expressions of indignation assumes a particular scientific theory: namely, that there are two separable psychological systems that shape expressions of moral outrage. One is a “genuine” system that evaluates a transgression in light of our moral values and determines what level of outrage we actually feel. The other is a “strategic” system that evaluates our social context and determines what level of outrage will look best to others. Authentic expressions of outrage involve only the first system, whereas virtue signaling involves the second system.