The number of active AI start-ups has increased 14-fold since 2000, and with that figure constantly rising, the ethical issues surrounding the use of artificial intelligence need to be addressed.
Artificial intelligence has become a ubiquitous term in the technology sector and while many companies profess to be using the new tech, few are tackling its ethical issues.
This is one area in which Dr Adrian Weller, director for AI at The Alan Turing Institute – Britain’s national institute for data science and AI – believes the UK is a global leader.
The Cambridge research fellow says: “While the UK has recently had a few other things on its mind, mostly relating to Europe, the UK has been admirably at the forefront of asking important questions about the use of AI technologies.
“Select committees in the House of Commons and the House of Lords have gathered expert opinions and called for consultations, to which the government responded last year.”
One result of the select committee reports was the formation of the Office for Artificial Intelligence to manage national policy on the technology.
What is the Centre for Data Ethics and Innovation?
The UK can also claim to be home to the world’s first Centre for Data Ethics and Innovation.
The advisory body will provide recommendations to government on how to maximise the potential of data-driven technologies and AI while also considering the wider social and ethical impacts.
Speaking at the Gartner Data & Analytics Summit in London last week, Dr Weller says: “Its role is to assess the overall landscape and identify areas that need to be changed in order to foster the right kind of innovation.
“It is a new and original thing. I don’t think we’ve seen other countries do this yet, and it shows a thoughtful response to the question of AI ethics.”
As part of Chancellor Philip Hammond’s 2018 Budget statement, the advisory body was commissioned to investigate how data is used to influence people’s experiences online and the potential for bias in algorithmic decision-making.
It will report back to the government later this year.
What are the ethical issues surrounding the use of artificial intelligence?
As the technology is currently in its infancy, and the cost of implementing intelligent systems remains high, the full potential of AI is yet to be realised.
Despite this, several ethical conundrums have already made headlines.
Dr Weller says: “As we move from consumer use cases that don’t have such significant impacts on our lives and role as citizens to business use cases with significant implications and consequences, ethical considerations need to be made.”
AI bias in recruitment
One of the current use cases for AI is in recruitment, where it is employed to filter thousands of applicants down to just a few to help make the interview process more efficient.
However, the AI is trained on past decisions made by humans and, if the user isn’t careful, it will replicate and reinforce any biases that exist in the data.
This topic came to the fore when an Amazon recruitment AI was scrapped after the machine learning algorithm was found to be penalising women.
The “sexist” AI had been trained on data from applicants over a ten-year period, the majority of which were submitted by men as a reflection of the male dominance in the tech industry.
The recruitment system taught itself to exclude CVs that mentioned the word “women” or referred to all-girls schools.
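The dynamic is easy to reproduce. The sketch below is purely illustrative: it uses synthetic data, a hypothetical “attended a women’s college” CV flag and a simple logistic regression as a stand-in for whatever model Amazon actually used. Trained on decisions in which past reviewers marked down CVs carrying that flag, the model learns to penalise it too.

```python
# A minimal sketch of how a screening model absorbs historical bias.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: a genuine skill score, and a hypothetical proxy flag
# such as "CV mentions a women's college" (1 = present).
skill = rng.normal(0, 1, n)
womens_college = rng.integers(0, 2, n)

# Historical human decisions: hiring depended on skill, but past
# reviewers also systematically marked down CVs with the proxy flag
# (the -1.5 weight is an assumed bias, chosen for illustration).
past_hired = (skill - 1.5 * womens_college + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, past_hired)

# The model faithfully reproduces the bias in its training data:
# the learned coefficient on the proxy feature is strongly negative.
print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))
```

Printing the coefficients shows a large negative weight on the proxy flag – the same pattern the Amazon system exhibited with CVs mentioning the word “women’s”.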
AI bias in the criminal justice system
Another controversial use case for AI is in the criminal justice system, where algorithms make recommendations on the sentencing of convicted criminals.
Machine learning models trained on historical data help judges decide the length of sentences and whether to grant bail, based on the likelihood that the defendant will reoffend.
However, Dr Weller believes this could raise questions around “fairness and transparency of the process”.
In 2003, Christopher Drew Brooks, 19, was convicted in Virginia of statutory rape, an offence that can carry up to ten years in prison, after having what he claimed was “consensual sex” with his 14-year-old girlfriend.
In reaching its decision, the Virginia court raised his jail term from the suggested seven-to-16-month range to 18 months, following the results of his risk score – which was calculated by an AI algorithm.
This algorithm took the offender’s age into consideration when determining Mr Brooks’ risk score.
It determined that his youth increased the likelihood of reoffending – had Mr Brooks been 36 years old, 22 years older than his victim, the AI would have recommended that he not be sent to jail at all.
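To see how age alone can swing such a recommendation, consider a toy risk score. The formula and weights below are invented for illustration and bear no relation to the actual instrument used by the Virginia courts:

```python
# A hypothetical, illustrative risk score -- not the real sentencing
# instrument. It shows how weighting youth as a risk factor can
# produce the counterintuitive outcome described above.
def risk_score(age: int, prior_offences: int) -> float:
    """Toy actuarial score: younger defendants score higher."""
    youth_factor = max(0, 30 - age) * 0.4   # assumed weight
    history_factor = prior_offences * 1.0   # assumed weight
    return youth_factor + history_factor

for age in (19, 36):
    print(f"age {age}: risk score {risk_score(age, prior_offences=0):.1f}")

# On identical facts, the 19-year-old scores far higher than the
# 36-year-old, so the younger man draws the harsher recommendation.
```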
Dr Weller says: “AI itself is never directly responsible – the responsible agent is the company that is deploying the AI system.
“We need to remember that and make sure people are at least sufficiently educated to appreciate the limitations of this technology.”
Don’t believe the hype
Dr Weller believes the easiest way to mitigate the risks of using AI is to “appreciate its limitations”.
There are currently 850 companies offering “AI” solutions in London alone.
However, a report in the Financial Times revealed that 40% of the AI start-ups in Europe do not use any artificial intelligence programmes in their products.
Dr Weller says: “Clearly, there is a lot of hype around AI at the moment – but to some extent, expectations may be getting ahead of what is feasible.
“There’s a lot of excitement about AI and as many will know, start-ups are strapping on the term ‘AI’ to describe any computer process.
“You need to be careful and have some experts to scrutinise what is technically going on under the surface.”
It’s hoped the UK’s Centre for Data Ethics and Innovation will help ensure that regulation keeps pace with the ever-growing number of use cases for artificial intelligence so these issues can be resolved.