The digital age is impacting all aspects of life, including the future of work. Technological innovations have the potential to transform the workplace and enhance productivity, but it will take proactive and thoughtful discussion to harness these innovations for social benefit.
To explore this further, MIT Sloan Experts is hosting the #MITSloanBrazil Twitter chat on August 21 at 9 a.m. ET (10 a.m. São Paulo) to discuss the topics and themes of the upcoming Future of Work Conference in Brazil.
The conference, which will bring together leading experts from business and academia, aims to highlight the ways in which artificial intelligence, automation and the changing economy are affecting the future of work. This issue is crucial in Brazil, where 12 percent of the country’s workforce is unemployed.
Join us on Twitter on August 21 at 9 a.m. ET (10 a.m. São Paulo) and follow along using the hashtag #MITSloanBrazil. Your comments and questions are encouraged! Simply include #MITSloanBrazil in your Tweets.
Fellow, MIT Center for Digital Business, Tom Davenport
From BizEd Magazine
The rise of data analytics is one of the hallmarks of 21st-century business. By the turn of the century, companies had been accumulating data in various transaction systems for several decades, and many desired to analyze the data to make better decisions. Their interest intensified in the early 2000s as they saw the great success of online firms from Silicon Valley, many of which were highly analytical.
In fact, during the mid-2000s, I conducted research showing that some companies were “competing on analytics”— that is, emphasizing their analytical capabilities as a key element of their strategies—and that those companies tended to outperform other firms in their industries. Information about analytics even made it into popular culture, especially through books such as Moneyball, which was also a successful movie. Both depicted how the Oakland A’s built a winning baseball team through targeted data analysis.
Susan Silbey, Leon and Anne Goldberg Professor of Humanities, Professor of Behavioral and Policy Science, MIT Sloan School of Management
From LSE Business Review
As artificial intelligence (AI) and machine learning techniques increasingly leave engineering laboratories to be deployed as decision-making tools in Human Resources (HR) and related contexts, recognition of and concerns about the potential biases of these tools grow. These tools first learn and then uncritically and mechanically reproduce existing inequalities. Recent research shows that this uncritical reproduction is not a new problem. The same has been happening among human decision-makers, particularly those in the engineering profession. In both AI and engineering, the consequences are insidious, but both cases also point toward similar solutions.
Bias in AI
One common form of AI works by training computer algorithms on data sets with hundreds of thousands of cases, events, or persons, with millions of discrete bits of information. Using known outcomes or decisions (what is called the training set) and the range of available variables, AI learns how to use these variables to predict outcomes important to an organisation or any particular inquirer. Once trained by this subset of the data, AI can be used to make decisions for cases where the outcome is not yet known but the input variables are available.
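The train-then-predict pattern described above can be sketched in a few lines of code. This is a deliberately minimal illustration: the toy hiring data and the nearest-neighbour rule are assumptions for the example, not the specific algorithms or data the article refers to.

```python
# Sketch of supervised learning: fit on a "training set" of cases with
# known outcomes, then predict outcomes for cases where only the input
# variables are available. (Toy data; 1-nearest-neighbour rule.)

def predict_nearest(training_set, case):
    """Predict the outcome of `case` by copying the outcome of the most
    similar training example (by squared Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, outcome = min(training_set, key=lambda ex: distance(ex[0], case))
    return outcome

# Training set: (input variables, known outcome) pairs — e.g.
# (years of experience, interview score) -> hiring decision.
training_set = [
    ((1.0, 3.0), "rejected"),
    ((2.0, 4.0), "rejected"),
    ((6.0, 8.0), "hired"),
    ((7.0, 9.0), "hired"),
]

# A new case whose outcome is not yet known:
print(predict_nearest(training_set, (6.5, 7.5)))  # -> hired
```

Note that the model can only echo whatever patterns, including inequitable ones, are present in the known outcomes it was trained on, which is precisely the bias concern raised above.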
Fellow, MIT Center for Digital Business, Tom Davenport
From the MIT Sloan Management Review
As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.
Ethical issues with AI can have a broad impact. They can affect the company’s brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it’s still too early to address AI ethical issues, but our surveys and others suggest that about 30% of large companies in the U.S. have undertaken multiple AI projects, with smaller percentages outside the U.S., and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical effects.
Many executives are beginning to realize the ethical dimension of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches to deal with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.
Have you heard of legacy code? In her article in the Financial Times, Lisa Pollack reveals how it has become a growing issue for businesses modernizing their software and systems, with many large organizations’ and government departments’ websites relying on code headed for the “digital dustbin.” Archaic languages such as Cobol, created 50 years ago, are a threat to progress not only because of their direct and obvious inconvenience (Pollack describes how the Pentagon was relying on eight-inch floppy disks to coordinate its intercontinental ballistic missiles and nuclear bombers, for example), but also because of the indirect cascade of side effects that legacy code may trigger unpredictably in other parts of a system when you try to change or overwrite any part of it. For example, a successful update, or overwriting, of some code for the purchasing part of a business within its ERP system may cause unexpected collateral damage elsewhere, a complex technological butterfly effect that demonstrates the interconnectedness of all systems. This is known as code fragility.
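The butterfly effect described above usually comes from hidden coupling: two parts of a system quietly share state, so a seemingly local change in one breaks the other. Here is a toy sketch of that failure mode; the function names, the tax-rate variable, and the ERP framing are hypothetical, invented only for illustration.

```python
# Toy illustration of code fragility via hidden shared state:
# "modernizing" the purchasing code silently changes a module-level
# value that the invoicing code also depends on.

TAX_RATE_PERCENT = 10  # shared state that several functions quietly read

def purchasing_total(net_price):
    """The purchasing team's 'updated' calculation."""
    global TAX_RATE_PERCENT
    TAX_RATE_PERCENT = 12  # the update silently mutates shared state
    return net_price + net_price * TAX_RATE_PERCENT // 100

def invoicing_total(net_price):
    """Invoicing code, never touched by the purchasing update."""
    return net_price + net_price * TAX_RATE_PERCENT // 100

print(invoicing_total(100))   # -> 110, the original behavior
print(purchasing_total(100))  # -> 112, the "successful" update
print(invoicing_total(100))   # -> 112, invoicing broke without being edited
```

In a real legacy system the coupling is rarely this visible; it hides in shared files, database columns, and decades-old conventions, which is exactly why overwriting one part of an old codebase can damage another.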
No department is an island, including yours
The article got me thinking again about my OPI model, and more generally about the complex ecosystem that’s at work in any business of any size. No department functions in isolation, and no department can be rebuilt or improved upon successfully if it is treated as an island.