From the MIT Sloan Management Review
As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.
Ethical issues with AI can have a broad impact. They can affect the company’s brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it’s still early to address AI ethical issues, but our surveys and others suggest that about 30% of large U.S. companies have undertaken multiple AI projects, with smaller percentages outside the U.S., and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical effects.
Many executives are beginning to realize the ethical dimension of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches to deal with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.
Make AI Ethics a Board-Level Issue
Since an AI ethical mishap can have a significant impact on a company’s reputation and value, we contend that AI ethics is a board-level issue. For example, Equivant (formerly Northpointe), a company that produces software and machine learning-based solutions for courts, faced considerable public debate and criticism about whether its COMPAS system for parole recommendations involved racially oriented algorithmic bias. Ideally, consideration of such issues would fall under a board committee with a technology or data focus. Unfortunately, such committees are relatively rare; where none exists, the entire board should be engaged.
Some companies have governance and advisory groups made up of senior cross-functional leaders to establish and oversee governance of AI applications or AI-enabled products, including their design, integration, and use. Farmers Insurance, for example, established two such boards — one for IT-related issues and the other for business concerns. Along with the board, governance groups such as these should be engaged in AI ethics discussions, and perhaps lead them as well.
A key output of such discussions among senior management should be an ethical framework for how to deal with AI. Some companies that are aggressively deploying AI, like Google, have developed and published such a framework.
Promote Fairness by Avoiding Bias in AI Applications
Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.
Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.
Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.
Most organizations should develop a set of risk management guidelines to help management teams reduce algorithmic bias within their AI or machine learning applications. They should address such issues as transparency and interpretability of modeling approaches, bias in the underlying data sets used for AI design and training, algorithm review before deployment, and actions to take when potential bias is detected. While many of these activities will be performed by data scientists, they will need guidance from senior managers and leaders in the organization.
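As an illustration of what such guidelines could require in practice, the sketch below implements one common bias check: comparing selection rates across groups and flagging a large gap. This is a minimal, hypothetical example (the function names and the 0.8 "four-fifths" threshold are our assumptions, not part of any particular company's guidelines), but it shows the kind of concrete test a pre-deployment algorithm review might mandate.

```python
# Hypothetical sketch of a disparate-impact check that an algorithm
# review guideline might require before deployment.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the approval rate for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    A common rule of thumb (the 'four-fifths rule') flags ratios
    below 0.8 for human review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: group A approved 50 of 100, group B approved 30 of 100.
sample = ([("A", True)] * 50 + [("A", False)] * 50
          + [("B", True)] * 30 + [("B", False)] * 70)
ratio = disparate_impact_ratio(sample)  # 0.3 / 0.5 = 0.6, below 0.8
flagged = ratio < 0.8
```

A real review process would go further, examining the training data and model internals rather than outcomes alone, but even an outcome check like this gives managers a measurable trigger for escalation.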
Lean Toward Disclosure of AI Use
Some tech firms have been criticized for not revealing AI use to customers, even in prerelease product demos, as with Google’s AI conversation tool Duplex (which now discloses that it is an automated service). Nontechnical companies can learn from their experience and take preventive steps to reassure customers and other external stakeholders.
A recommended ethical approach to AI usage is to disclose to customers or affected parties that it is being used and provide at least some information about how it works. Intelligent agents or chatbots should be identified as machines. Automated decision systems that affect customers — say, in terms of the price they are being charged or the promotions they are offered — should reveal that they are automated and list the key factors used in making decisions. Machine learning models, for example, can be accompanied by the key variables used to make a particular decision for a particular customer. Every customer should have the “right to an explanation” — not just those affected by the GDPR in Europe, which already requires it.
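To make the idea of disclosing key factors concrete, here is a minimal sketch assuming a simple linear scoring model (the function name, weights, and features are hypothetical). The point is that the automated decision returned to the customer carries the variables that contributed most to it, supporting a "right to an explanation."

```python
# Hypothetical sketch: returning an automated decision together with
# the key factors behind it, assuming a simple linear scoring model.
def explain_decision(weights, features, top_n=3):
    """Score the customer's features and list the features with the
    largest absolute contribution, so the decision can be disclosed
    along with its main drivers."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    key_factors = sorted(contributions,
                         key=lambda k: abs(contributions[k]),
                         reverse=True)[:top_n]
    return {"approved": score >= 0.0, "key_factors": key_factors}

decision = explain_decision(
    weights={"income": 0.5, "debt": -0.8, "tenure": 0.1},
    features={"income": 2.0, "debt": 1.0, "tenure": 4.0},
)
# contributions: income 1.0, debt -0.8, tenure 0.4; score ~0.6, approved
```

For black box models, the same interface can be kept while the contributions come from a post hoc attribution method instead of raw coefficients; what matters for disclosure is the shape of the answer, not the modeling technique.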
Also consider disclosing the types and sources of data used by the AI application. Consumers who are concerned about data misuse may be reassured by full disclosure, particularly if they perceive that the value they gain exceeds the potential cost of sharing their data.
While regulations requiring disclosure of data use are not yet widespread outside of Europe, we expect that requirements will expand, most likely affecting all industries. Forward-thinking companies will get out ahead of regulation and begin to disclose AI usage in situations that involve customers or other external stakeholders.
Tread Lightly on Privacy
AI technologies are increasingly finding their way into marketing and security systems, potentially raising privacy concerns. Some governments, for example, are using AI-based video surveillance technology to identify facial images in crowds and social events. Some tech companies have been criticized by their employees and external observers for contributing to such capabilities.
Financial services and other industries increasingly use AI to identify data breaches and fraud attempts. Substantial numbers of “false positive” results mean that some individuals — both customers and employees — may be unfairly accused of malfeasance. Companies employing these technologies should consider using human investigators to validate suspected fraud or hacking before making accusations or turning suspects over to law enforcement. At least in the short run, AI used in this context may actually increase the need for human curators and investigators.
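The human-in-the-loop safeguard described above can be expressed as a simple routing rule: the model's score can queue a case for investigation, but it never triggers an accusation on its own. This is a hypothetical sketch (the function name and threshold are our assumptions, not an actual fraud system's API).

```python
# Hypothetical sketch: a model's fraud score routes the case, but no
# accusation or law-enforcement referral happens without a human.
def route_alert(fraud_score, review_threshold=0.7):
    """Return the next step for a scored transaction. High scores go
    to a human investigator for validation; nothing is escalated to
    law enforcement automatically."""
    if fraud_score >= review_threshold:
        return "human_review"  # investigator validates before any accusation
    return "no_action"

# A high-scoring case is queued for review, not auto-flagged as fraud.
step = route_alert(0.92)  # "human_review"
```

The threshold itself then becomes a governance decision: lowering it catches more fraud but sends more innocent customers into the review queue, which is exactly the trade-off the paragraph above asks leaders to weigh.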
Read the full post at MIT Sloan School of Management.
Tom Davenport is a Fellow at the MIT Center for Digital Business.
Vivek Katyal is a Risk Analytics Leader at Deloitte & Touche LLP.