How Predictive Algorithms Elevate Systemic Inequality in European Countries
With case examples from France and Sweden
Introduction
Over the last year and a half, several revealing investigations have surfaced about the use of discriminatory fraud detection algorithms in Europe.
As a method of combating fraud, social insurance agencies commonly rely on algorithmic systems to automatically select welfare beneficiaries perceived as “high risk” for fraud investigations. These decisions are based on algorithmic parameters alone, stripped of all human judgment, and can lead to intrusive investigations with life-altering consequences for the individuals and families affected, especially if their welfare payments are suspended while the investigations are ongoing.
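To make the mechanism concrete, the selection step described above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch: the feature names, weights, and threshold below are invented for the example, since the real systems are opaque and their parameters are not public. The point it illustrates is structural, not factual: a fixed scoring formula and a numeric cut-off decide who gets investigated, with no human judgment anywhere in the loop.

```python
# Hypothetical sketch of automated fraud-risk selection.
# All feature names, weights, and the threshold are invented for
# illustration; real systems do not disclose their parameters.

def risk_score(beneficiary: dict) -> float:
    """Weighted sum over proxy variables -- a common pattern in risk models."""
    weights = {
        "months_on_benefits": 0.02,
        "address_changes": 0.15,
        "irregular_income_reports": 0.40,
    }
    return sum(weights[k] * beneficiary.get(k, 0) for k in weights)

def flag_for_investigation(beneficiaries: list[dict],
                           threshold: float = 1.0) -> list[dict]:
    """Flag everyone above the threshold -- no human reviews the decision."""
    return [b for b in beneficiaries if risk_score(b) >= threshold]
```

Note that even in this toy version, the proxy variables (frequent address changes, irregular income) can correlate with poverty, migration background, or precarious work, which is precisely how such systems can end up discriminating without any protected characteristic appearing in the formula.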
A cautionary tale regarding the practice of algorithmic fraud detection occurred some years ago when it was brought to the public’s attention that the Dutch tax authorities had falsely labelled thousands of parents as fraudulent tax beneficiaries between 2013 and 2019 due to dysfunctional self-learning algorithms. The incorrect algorithmic decisions resulted in financial hardship and devastating consequences for many of the families involved, as they were forced to repay what were in fact legitimate childcare benefits. The Dutch government collectively resigned in the wake of the scandal. It was that serious.
As of today, similar algorithmic systems are used by public authorities all over Europe, notably in France, Serbia, Denmark, Sweden, Spain, and (still) the Netherlands, to detect welfare fraud by cross-referencing personal information about citizens, including sensitive information, at a massive scale. The systems operate entirely out of public view, and there is no way to obtain information about how they work. We likely wouldn’t know about them at all if it weren't for the dedicated and relentless efforts of civil rights organizations, newsrooms, NGOs, and privacy watchdogs.
Applying these algorithmic fraud detection systems is at odds with European human rights law, the principles of the GDPR, and the prohibition against automated decision-making in GDPR Article 22. The systems could even be considered to carry out social scoring, which counts as a “prohibited practice” under the AI Act. Even if the systems are not directly prohibited under the AI Act, the social service agencies would have to comply with its strict requirements for "high-risk systems".
In this week’s post, we will take a look at how algorithmic fraud detection systems are applied in France and Sweden, and finally consider the legal perspective.