The number of use cases for artificial intelligence (AI) is increasing rapidly, driven mainly by larger data sets and faster, cheaper compute. A 2018 McKinsey report identified 400 practical use cases for AI across 19 industries, a number that has almost certainly grown since then.
With AI systems penetrating more and more facets of our daily lives – from deciding the adverts we see to determining whether we’re eligible for a bank loan to having a hand in the success of our job applications – it is imperative that these models make unbiased decisions. Currently, there is considerable controversy over whether companies are inadvertently deploying AI systems that produce unethical outcomes.
For example, Amazon had been developing and trialling a hiring tool since 2014, which used AI to automatically rate and rank potential job candidates. However, for technical posts such as software development, the system had taught itself to show bias against female applicants, even though Amazon had edited the programs to make them neutral to gendered terms. This unwelcome behaviour arose because the computer models were trained to vet applicants by observing patterns in resumes submitted over the previous ten years – and most tech candidates during this period had been male. Unable to get around this hurdle, Amazon abandoned the project at the start of 2017.
Given that AI is driving more and more life-altering decisions, how can we be sure the outcomes of these decisions are fair for every person involved, every time?
Introducing bias through confounding variables
A common misconception is that AI models will be neutral if factors such as age, gender and race are excluded as data inputs. This is not true: in optimising for their objective, machine learning models will exploit whatever correlations exist in the data, including proxy variables that stand in for the excluded attributes. When

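The point above – that simply dropping a protected attribute does not remove its signal – can be sketched with a toy simulation. Everything here is hypothetical: the proxy feature, the membership rates and the sample size are illustrative numbers chosen only to show how a correlated variable can reconstruct an excluded one.

```python
import random

random.seed(0)

# Hypothetical scenario: 'gender' is dropped from the model's inputs,
# but a correlated proxy remains (say, membership in a club that skews
# heavily male in the historical data). The 90%/10% rates are invented.
n = 10_000
rows = []
for _ in range(n):
    gender = random.choice(["M", "F"])
    member = 1 if random.random() < (0.9 if gender == "M" else 0.1) else 0
    rows.append((gender, member))

# A model that never sees gender can still act on it, because the proxy
# alone recovers gender for most people in the sample.
correct = sum(1 for g, m in rows if (g == "M") == (m == 1))
recovery_rate = correct / n
print(f"gender recovered from the proxy alone: {recovery_rate:.0%}")
```

With these (invented) rates, knowing only the proxy lets you guess gender correctly roughly nine times out of ten, which is why excluding the attribute itself offers little protection.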