HOW THE DIGITAL WELFARE STATE THREATENS OUR RIGHTS

Posted by Joseph Maggs on 22 October 2019

Last week, the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, published his long-awaited report on the emergence of “digital welfare states” across the globe – including the UK.

Building on a visit to the UK last November, he concludes that our Government is rapidly utilising new technologies to digitise and automate the provision of welfare benefits and essential services, with significant implications for our human rights.

Computer algorithms – complex data-processing tools – are being used to determine people’s eligibility for essential services and to calculate their benefit entitlements. They are also being used to profile people according to their apparent “risk” of committing welfare fraud and other abuses.

This is all taking place in what the Special Rapporteur calls an “almost human rights-free zone”, where transparency, oversight and regulation are sorely lacking.

At Liberty, we have long fought against the human rights violations that go hand in hand with poverty. The digital welfare state poses a host of new and deeply troubling threats to our rights. If left unchallenged, it will:

1 UNDERMINE PRIVACY AND DATA PROTECTION RIGHTS

These systems rely on “big data” – via the mining, matching and sharing of huge quantities of personal information across different services and Government departments. They are touted as a panacea for a range of problems across the public sector. But they also risk maximising the State’s ability to exercise control over us.

To take a typical example, an algorithm used by Bristol City Council processes personal data from a range of public bodies – including hospitals, schools and the police – in order to assess the “vulnerability” of some 170,000 people, around a quarter of the city’s population.

While protecting vulnerable people is without doubt a laudable aim, this use of data represents a serious invasion of privacy, and may amount to a form of surveillance. More and more of our interactions with the state are logged and accessible to different public – and sometimes private – bodies. We have little control over this data and how it is used to predict our behaviour, and to monitor and make decisions about our day-to-day lives – including which kinds of State intervention we are subject to, and whether or not we can access essential services and support.

In the digital welfare state, as the Special Rapporteur argues, “citizens become ever more visible to their governments, but not the other way round”.

2 ENTRENCH INEQUALITY AND DISCRIMINATION

Digital profiling and surveillance disproportionately impact those who already interact most frequently with the State. For those most in need of social security – people in poverty, disabled people and older people – access to essential services is becoming conditional on surrendering the right to privacy.

This could also have a chilling effect on over-policed black and minority ethnic communities, who may reduce their interactions with the State to avoid producing data points that may later be used against them. Further, biased datasets can create feedback loops that entrench existing patterns of discrimination in society.
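To see how such a feedback loop can work, consider a deliberately simplified sketch in Python. It is purely hypothetical and models no real welfare, policing or council system: two areas have identical true rates of some incident, but one starts out more heavily monitored. If attention is then allocated in proportion to past recorded incidents, the more-watched area keeps generating more records, and the accumulating data appears to confirm a disparity that was never there.

    # Purely illustrative toy model of a data feedback loop.
    # Not based on any real welfare, policing or council system.
    import random

    random.seed(0)

    TRUE_RATE = 0.05                 # the same underlying rate in both areas
    recorded = {"A": 60, "B": 40}    # area A starts out more heavily watched
    PATROLS_PER_ROUND = 100

    for step in range(10):
        total = sum(recorded.values())
        # Attention this round is allocated in proportion to past records.
        patrols = {area: round(PATROLS_PER_ROUND * n / total)
                   for area, n in recorded.items()}
        for area, n_patrols in patrols.items():
            # More patrols mean more chances to record an incident,
            # even though the true rate is identical in both areas.
            hits = sum(random.random() < TRUE_RATE for _ in range(n_patrols))
            recorded[area] += hits
        print(step, recorded)

    # Area A keeps logging more incidents than B simply because it is
    # watched more, so the dataset "confirms" the initial disparity.

The point is not the particular numbers but the structure: the data the system learns from is partly a product of the system’s own earlier decisions.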

There is also the issue of digital access. The Government’s flagship streamlined benefits system, Universal Credit, is “digital-by-default” – or, more accurately, “digital only”. This makes it exclusionary by design for the millions of people in the UK for whom the internet is not an accessible option. Life-saving safety nets are being withdrawn from them through no fault of their own.

3 ERODE ACCOUNTABILITY

The Special Rapporteur argues that the digital welfare state constitutes “a complete reversal of the traditional notion that the state should be accountable to the individual”.

A recent Guardian investigation found that one in three local authorities in the UK are using algorithms to make welfare decisions. Algorithms, especially those with elements of artificial intelligence, are “black boxes”: the logic behind their decisions is difficult to understand, and therefore difficult to challenge.

Even where human decision-makers remain in the loop, it is questionable whether they are still in control of this technology. Misplaced trust in the recommendations of an algorithm – a phenomenon known as “automation bias” – can mean that human discretion and professional judgement are surrendered entirely.

This accountability gap is compounded by the prominence of private companies in designing, building and sometimes operating these systems. There is little to no scrutiny of these public-private partnerships, and commercial secrecy is too often being used to stifle freedom of information.

PREVENTING A DIGITAL WELFARE DYSTOPIA

The Government – backed by the tech industry – is trying to persuade us that the technological changes it imposes from above are both benevolent and inevitable.

But the Special Rapporteur warns that we are “stumbling zombie-like into a digital welfare dystopia” and endangering our fundamental rights in the process.

Calls for greater transparency, oversight and regulation presuppose that the roll-out of this tech is inevitable – and that resistance is futile. The reality is that in certain contexts some technologies can never be human rights-compliant and should therefore have no place in a rights-respecting society.

Liberty will continue to actively monitor the growing use of technologies to automate the provision of basic public services, and to fight against any use which puts our rights at risk.

Joseph Maggs
Policy and Campaigns Intern, Liberty