From LSE Business Review
As artificial intelligence (AI) and machine learning techniques increasingly leave engineering laboratories to be deployed as decision-making tools in Human Resources (HR) and related contexts, recognition of, and concern about, the potential biases of these tools grow. These tools first learn and then uncritically and mechanically reproduce existing inequalities. Recent research shows that this uncritical reproduction is not a new problem: the same has long been happening among human decision-makers, particularly those in the engineering profession. In both AI and engineering, the consequences are insidious, but both cases also point toward similar solutions.
Bias in AI
One common form of AI works by training computer algorithms on data sets with hundreds of thousands of cases, events, or persons, with millions of discrete bits of information. Using known outcomes or decisions (what is called the training set) and the range of available variables, AI learns how to use these variables to predict outcomes important to an organisation or any particular inquirer. Once trained by this subset of the data, AI can be used to make decisions for cases where the outcome is not yet known but the input variables are available.
For example, AI can “learn” using a training set that includes curriculum vitae data from job applicants and information describing already hired employees. After learning, AI can select applicants who have not yet been hired but who look like the kinds of persons the firm is most likely to hire based on applicants’ CVs. AI tools are being developed and adopted to help with decisions in hiring, promotion, college admission, and more.
Assuming that the existing patterns of selection decisions have been fair and correct, AI can be an efficient aid in the decision process. If, however, the human-based selection processes upon which AI is trained are somehow flawed or biased, then not only will AI be unable to detect and correct those biases, it will actually encode and perpetuate those flaws. In this way, AI reproduces patterns of discrimination.
These machine-learning algorithms are designed to replicate, with increasing accuracy, the outcome decisions in their training data. Once learned by the AI program, the biases represented within the training data set get codified and applied to incoming data for new decisions. Once embedded in the program’s procedures for sorting through applicants, these biased practices appear to be the neutral outcomes of an ostensibly objective process. Once encoded, these biases are even harder to detect and correct. The algorithm relies on and reproduces past practice.
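The mechanism can be made concrete with a deliberately simplified sketch. This toy model (not from the article; the schools, data, and hire rates are entirely hypothetical) "trains" on past hiring decisions by learning each group's historical hire rate, then recommends new candidates on that basis. Because past hiring favoured one group, the model mechanically favours that group again, with no explicit rule ever mentioning the group:

```python
# Toy illustration of bias reproduction: a trivial "model" trained on past
# hiring decisions. All feature names and data are hypothetical.
from collections import defaultdict

# Past decisions as (school, hired) pairs. Suppose historical hiring
# favoured applicants from "school_a" regardless of merit.
training = [
    ("school_a", True), ("school_a", True), ("school_a", True),
    ("school_b", False), ("school_b", False), ("school_b", True),
]

# "Training": learn the historical hire rate for each feature value.
outcomes = defaultdict(list)
for school, hired in training:
    outcomes[school].append(hired)
model = {school: sum(hires) / len(hires) for school, hires in outcomes.items()}

def predict(school, threshold=0.5):
    """Recommend hiring when the group's historical hire rate exceeds the threshold."""
    return model.get(school, 0.0) >= threshold

# The model reproduces the historical pattern as an apparently neutral score:
print(predict("school_a"))  # True  -- the favoured group is favoured again
print(predict("school_b"))  # False -- the disfavoured group stays disfavoured
```

Real systems use far richer features and models, but the structural point is the same: the learned scores look objective while simply restating the biased decisions they were fitted to.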
Bias in engineering
In our longitudinal study of men and women pursuing engineering careers, we found what at first appeared to be an odd contradiction, which upon further analysis revealed a similar pattern of reproducing embedded preferences. First, most of the women in our study reported first-hand experiences of sexist treatment and exclusion by classmates, professors, internship co-workers and supervisors, and more. Second, despite these reported experiences, the women continued to believe that the engineering profession is a meritocracy that accurately recognises and rewards hard work and achievement. Although their own personal observations provided abundant evidence that engineering is not an objective meritocracy, they embraced the idea that it is. How does this happen?
Meritocracy – the idea that the best ideas and the best work win out – is an important value in the engineering profession. Part of the informal training of engineers via socialisation reproduces the belief that the profession is a meritocracy. While it may be meritocratic in many ways, it is an imperfect meritocracy composed of imperfect humans. But accepting the idea that engineering is a meritocracy necessarily means that the observed under-representation of women (and people of colour) in engineering must be the result of fair and meritocratic processes. As in the AI example, the possible role of bias in creating the current state is neglected. Instead, the current state is taken as the implicit standard.
Read the full post at LSE Business Review.
Susan Silbey is the Leon and Anne Goldberg Professor of Humanities, professor of sociology and anthropology, and professor of behavioral and policy sciences at the MIT Sloan School of Management.
Brian Rubineau is an associate professor of organizational behavior at the Desautels Faculty of Management at McGill University.
Erin Cech is an assistant professor of sociology at the University of Michigan.
Carroll Seron is a professor in the department of criminology, law and society at the University of California, Irvine.