Artificial Intelligence: what it is and why you should care

Posted by Hannah Couchman on 23 May 2018

We are living in the future and Artificial Intelligence (AI) is all around us.

Amazon’s Echo is the font of all knowledge. Facebook can recognise people’s faces in photos. And these days it’s hard to buy a household appliance that doesn’t think for itself.

But what exactly is AI, and why should we be wary of its rapid development and wholesale roll-out?

What is AI?

Artificial Intelligence is the name given to computer programs that think in a way that mimics human intelligence. For example, Facebook uses AI to show you content it thinks you will find interesting.  

And programs can also ‘learn’ and improve – like Gmail’s spam filter, which recognises spam emails more accurately the more it looks at them.     
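To make that 'learning' concrete, here is a minimal sketch of the kind of statistics a spam filter can use. It is not Gmail's actual method (which is not public); it is a toy naive Bayes classifier trained on a handful of made-up messages, where every message and word list is hypothetical.

```python
from collections import Counter
import math

# Hypothetical toy training data -- a real filter learns from millions of messages.
spam = ["win cash now", "free prize claim now"]
ham = ["meeting moved to noon", "see you at lunch"]

def word_counts(messages):
    """Count how often each word appears across a list of messages."""
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    # Naive Bayes with add-one smoothing: for each word, compare its
    # probability under the spam model with the ham model. A positive
    # total means the message looks more like the spam it was trained on.
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score
```

The key point for this article is the last step: the model's judgement is entirely a product of its training data. Feed it skewed examples and it will confidently reproduce that skew – which is exactly the bias problem discussed below.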

What’s the problem?

AI has the potential to improve our lives – but advances are being made so quickly the law is struggling to keep up.

And there are significant concerns about how AI will impact our human rights – or how it already has.

Bias

One of the problems with a program mimicking human intelligence is that it picks up human prejudices – so AI programs are not, in fact, objective and their use can lead to discriminatory outcomes.     

Durham Police uses an AI program, the Harm Assessment Risk Tool (HART), to predict whether a person is likely to commit a crime – but it assesses information including postcodes, which can exacerbate existing biases in criminal justice decision-making.

Durham even paid Experian to use its credit profiling information – which, alarmingly, suggests the tool draws on a person’s financial situation when assessing how likely they are to offend.

It has been reported that, by design, the tool is more likely to classify someone as medium or high risk “to avoid releasing suspects who may commit a crime”.

Liberty asked how often a human is involved in the decision. Durham declined to comment.

Facial recognition software is another example of biased AI. It’s a lawless invasion of privacy, being rolled out in the UK without any oversight or legal framework.

In February, a US study found this technology was much more likely to misidentify women and BAME people.

Privacy

AI systems generally rely on big data – they are trained using enormous sets of personal information.

The use of datasets should always rely on informed consent and uphold individual privacy – but the rush to develop AI means this isn’t always respected.

Last year London’s Royal Free Hospital handed the sensitive medical data of 1.6 million people over to Google’s DeepMind, without patients’ knowledge or consent.

As AI infiltrates our day-to-day activities, we’re losing our ability to keep things private and understand how our data is used.

Everyday objects are now connected to the internet and ‘talking’ to each other. This means AI can collect even more information about us – not only from our computers and phones but from our cars and fridges.

This is tantamount to being watched as we go about our everyday lives. It leads to a “chilling effect” where we start to self-monitor our behaviour – curbing our free expression and damaging democratic society.

What needs to be done?

The Data Protection Bill recently passed through Parliament, and will soon become law. Liberty joined other civil liberties organisations in calling for protection from automated decisions when our fundamental rights are at stake – but the amendment was defeated.

We will continue to fight for these protections. There doesn’t need to be tension between our rights and scientific progression as we enter this brave new world – but there will be unless we stand up for our rights now.

Software must be designed with privacy, freedom of expression, accountability and civil liberties as key principles.

Programs need to be thoroughly tested and deployed with rigorous oversight to guard against prejudice – and AI must never be the sole basis for a decision which affects someone’s human rights.

Most importantly, the Government, police and other agencies must ensure we all know how the AI they use works, how our rights will be protected and how we can challenge the decisions made about us.

Hannah Couchman

Liberty
Advocacy and Policy Officer