As the technology behind artificial intelligence (AI) has advanced, the scope for its deployment has widened dramatically. More businesses are relying on AI for an increasingly diverse array of processes, from HR to financial services, bringing dynamism and efficiency to areas that have previously been time- and resource-hungry. For all its benefits, however, AI is not without its limitations, and one of the most pressing technology issues of the moment is bias in AI.

 

The problem of AI bias

One of the benefits of AI is its potential to remove bias from sensitive areas: computers can't think, and they can't hold opinions. Unfortunately, it is becoming increasingly apparent that although AI cannot itself be inherently biased, the data used to train it can be, and this allows human bias to manifest in AI decision-making processes.

This bias has already been well documented. Amazon's sexist recruitment AI was famously scrapped in 2018, Google's photo algorithm was found to be racist, and the same was true of America's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software. The problem is that, whether or not we acknowledge it or are even aware of it, we all carry our own personal biases. This means that every AI system has the potential to be influenced by the biases of the people populating it with data.

 

How does AI become biased?

AI can be – and is being – used in an enormous array of scenarios, from manufacturing and production to security and surveillance. It can make 'informed' decisions faster than any human possibly could. In healthcare, we can now rely upon machines to diagnose cancer with an astonishing degree of accuracy. But this is only possible because these machines have been trained on swathes of data previously labelled by human programmers, and it is this human element that has become AI's Achilles heel when it comes to bias.

Without the data, AI becomes impossible. With the data, the results are becoming progressively more sophisticated: AI is now capable of identifying a person's race based solely on an x-ray, something that medical professionals cannot do. The problem is that this enables unintended bias to creep in. The machines can only predict based on their training, and if the training data is biased, so are the future predictions.
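To make that concrete, here is a minimal sketch in Python with made-up data, showing how a model trained on skewed historical hiring decisions simply reproduces the skew. The hiring scenario, feature names and numbers are illustrative assumptions for this example, not drawn from any real system.

```python
# A minimal sketch (made-up data): a model trained on skewed historical
# hiring decisions reproduces that skew in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)            # a genuine, job-relevant score
group = rng.integers(0, 2, size=n)    # a protected attribute (0 or 1)

# Historical labels are biased: group 1 was hired less often at the same skill level.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The learned coefficients show the model penalising group membership,
# not just rewarding skill: the bias in the data has become the model's bias.
print("coefficients [skill, group]:", model.coef_[0])
```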

 

Can we debias AI?

With human-labelled data being used to train AI models, it is easy for bias to find its way in. There is, currently, no way around that problem. But if we also build explainability into AI systems, we can start to create a solution.

Explainable AI (XAI) essentially creates a process whereby we can ask software why it came to a certain decision and what it based its results on. If the reasoning is faulty, or biased, we are then equipped not only to override the decision on a case-by-case basis but to correct the data and the methodology, so the mistake is not repeated.
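As one hedged illustration of what that question-and-answer loop can look like in practice, the sketch below uses permutation importance from scikit-learn on a hypothetical loan-approval model. The data and feature names are invented for the example, and permutation importance is just one of several explainability techniques, not a method prescribed here.

```python
# A sketch of "asking the model why": permutation importance reveals which
# inputs a hypothetical loan-approval model actually bases its decisions on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2_000

income = rng.normal(size=n)                    # a legitimate signal
postcode_group = rng.integers(0, 2, size=n)    # a proxy for a protected attribute

# Biased historical decisions: one postcode group was declined more often.
approved = (income - 0.7 * postcode_group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([income, postcode_group])
model = RandomForestClassifier(random_state=0).fit(X, approved)

# Ask which features the model's decisions actually rest on.
result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in zip(["income", "postcode_group"], result.importances_mean):
    print(f"{name}: {score:.3f}")

# A large importance for "postcode_group" is the red flag: the decision can be
# overridden, and the data and methodology corrected so the mistake is not repeated.
```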

With explainability, we not only begin to remove bias from AI; we also gain the power to justify decisions, which can be integral in all areas, whether it's an employee challenging a decision not to award a pay rise or a borrower querying why their application has been declined. It creates a state of transparency that removes the potential for bias and abuse. And with the EU's data protection regulation, GDPR, already stating that everyone has a right to have automated decisions explained, it's a scenario that is becoming all the more pressing to establish.

AI is too useful a tool to abandon. Despite its growing prevalence, we've still only scratched the surface of what it can achieve. But if we are going to continue to use it, and allow the technology to reach its full potential, we have to use it correctly and ethically, and that can only be achieved if debiasing remains a key priority.

 

Author – Nigel Cannings, CTO at Intelligent Voice
