Bias and belief in meritocracy in AI and engineering – Susan Silbey, Brian Rubineau, Erin Cech, Carroll Seron

Susan Silbey, Leon and Anne Goldberg Professor of Humanities, Professor of Behavioral and Policy Science, MIT Sloan School of Management

From LSE Business Review 

As artificial intelligence (AI) and machine learning techniques increasingly leave engineering laboratories to be deployed as decision-making tools in Human Resources (HR) and related contexts, recognition of and concerns about the potential biases of these tools grow. These tools first learn and then uncritically and mechanically reproduce existing inequalities. Recent research shows that this uncritical reproduction is not a new problem. The same has been happening among human decision-makers, particularly those in the engineering profession. In both AI and engineering, the consequences are insidious, but both cases also point toward similar solutions.

Bias in AI

One common form of AI works by training computer algorithms on data sets with hundreds of thousands of cases, events, or persons, comprising millions of discrete bits of information. Using cases whose outcomes or decisions are already known (the training set) and the range of available variables, the algorithm learns how to use those variables to predict outcomes important to an organisation or any particular inquirer. Once trained on this subset of the data, the model can be used to make decisions for cases where the outcome is not yet known but the input variables are available.
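As a concrete illustration, the workflow described above can be sketched in a few lines of Python. This is a minimal sketch, not the method of any particular system: the scikit-learn library, the file names, the column names, and the choice of logistic regression are all illustrative assumptions. The point is only the shape of the process: fit a model on cases with known outcomes, then predict for new cases where only the input variables are available.

```python
# Minimal sketch of the supervised-learning workflow described above.
# All file names, column names, and the model choice are hypothetical;
# the input variables are assumed to be numerically encoded.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical HR data: each row is a past applicant, with input
# variables and a recorded hiring decision (the known outcome).
data = pd.read_csv("past_applicants.csv")
features = ["years_experience", "degree_level", "referral"]
X = data[features]
y = data["was_hired"]  # known outcomes the model learns from

# Hold out part of the labelled data to check predictive accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns how the input variables relate to past decisions...
model = LogisticRegression()
model.fit(X_train, y_train)
print("accuracy on held-out cases:", model.score(X_test, y_test))

# ...and can then score new cases whose outcome is not yet known.
# Because it reproduces patterns in past decisions, any bias in those
# decisions is learned and repeated uncritically.
new_applicants = pd.read_csv("new_applicants.csv")
predictions = model.predict(new_applicants[features])
```

Note that nothing in this pipeline asks whether the historical decisions were fair; the model's only objective is to reproduce them accurately, which is precisely how past inequalities get carried forward.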
