Susan Silbey, Leon and Anne Goldberg Professor of Humanities, Professor of Behavioral and Policy Science, MIT Sloan School of Management
From LSE Business Review
As artificial intelligence (AI) and machine learning techniques increasingly leave engineering laboratories to be deployed as decision-making tools in Human Resources (HR) and related contexts, recognition of and concerns about the potential biases of these tools grow. These tools first learn and then uncritically and mechanically reproduce existing inequalities. Recent research shows that this uncritical reproduction is not a new problem: the same has long been happening among human decision-makers, particularly those in the engineering profession. In both AI and engineering, the consequences are insidious, but both cases also point toward similar solutions.
Bias in AI
One common form of AI works by training computer algorithms on data sets with hundreds of thousands of cases, events, or persons, with millions of discrete bits of information. Using known outcomes or decisions (what is called the training set) and the range of available variables, AI learns how to use these variables to predict outcomes important to an organisation or any particular inquirer. Once trained by this subset of the data, AI can be used to make decisions for cases where the outcome is not yet known but the input variables are available.
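The train-then-predict pattern described above can be sketched in a few lines. This is a deliberately minimal illustration, not any specific HR tool: the data, the feature names, and the 1-nearest-neighbour rule are all invented for the example. It also makes the bias mechanism concrete: the model can only echo whatever patterns, fair or unfair, are present in the known outcomes it was trained on.

```python
# Minimal sketch of training on known outcomes, then predicting unknown ones.
# The data and the 1-nearest-neighbour rule are illustrative inventions.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_set, features):
    """Return the known outcome of the most similar training case."""
    nearest = min(training_set, key=lambda case: distance(case[0], features))
    return nearest[1]

# Training set: (input variables, known outcome) pairs -- past decisions.
training_set = [
    ((5.0, 1.0), "hire"),
    ((4.5, 2.0), "hire"),
    ((1.0, 0.5), "reject"),
    ((2.0, 1.0), "reject"),
]

# A new case where the outcome is not yet known.
print(predict(training_set, (4.0, 1.5)))
```

Whatever regularities the past decisions contain, including discriminatory ones, the trained model reproduces mechanically; nothing in the procedure distinguishes a legitimate pattern from an inherited bias.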
Fellow, MIT Center for Digital Business, Tom Davenport
From the MIT Sloan Management Review
As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.
Ethical issues with AI can have a broad impact. They can affect the company’s brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it’s still too early to address AI ethical issues, but our surveys and others suggest that about 30% of large companies in the U.S. have undertaken multiple AI projects, with smaller percentages outside the U.S., and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical effects.
Many executives are beginning to realize the ethical dimension of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches to deal with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.
MIT Sloan Senior Lecturer, Tara Swart
Have you heard of legacy code? In her article in the Financial Times, Lisa Pollack reveals how this has become a growing issue for businesses modernizing their software and systems, with many large organizations’ and government departments’ websites relying on code headed for the “digital dustbin.” Archaic languages such as COBOL, created 50 years ago, are a threat to progress not only because of their direct and obvious inconvenience (Pollack describes how the Pentagon was relying on eight-inch floppy disks to coordinate its intercontinental ballistic missiles and nuclear bombers, for example) but also because of the indirect cascade of side effects that legacy code may trigger unpredictably in other parts of a system when you try to change or overwrite any part of it. For example, a successful update, or overwriting, of some code for the purchasing part of a business within its ERP system may cause unpredictable and unexpected collateral damage elsewhere, as part of a complex technological butterfly effect that demonstrates the interconnectedness of all systems. This is known as code fragility.
No department is an island, including yours
The article got me thinking again about my OPI model, and more generally about the complex ecosystem that’s at work in any business of any size. No department functions in isolation, and no department can be rebuilt or improved upon successfully if it is treated as an island.
MIT Sloan Visiting Lecturer Irving Wladawsky-Berger
Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discussing the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us. Irving has a PhD in Physics from the University of Chicago; he is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor at Imperial College London, and a fellow of the Center for Global Enterprise.
Here is the podcast transcript:
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor at Imperial College London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.
Irving Wladawsky-Berger: Byron, it’s a pleasure to be here with you.
So, that’s a lot of things you do. What do you spend most of your time doing?
Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.
MIT Sloan Professor Thomas Kochan
From The Conversation
The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.
While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.
Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”
The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.