When machines make decisions about people

This article first appeared in the current Civil Society Atlas. This report on the global state of fundamental rights is published annually by Bread for the World; this year's edition addresses the role that digitalization plays in securing or restricting these rights.

Is it permissible to classify the unemployed by age, gender, or the number of children they have, and then, depending on their category, deny them further training? In Austria, this question goes before the Supreme Administrative Court at the beginning of 2021. The court will decide whether the Austrian Public Employment Service (AMS), which is comparable to German job centers, may in future use a computer system across the country that automatically sorts job seekers into groups and thus supports advisors in deciding how to proceed with them.

Technically, it is not complicated: the labor market opportunities model, as it is officially called, divides job seekers into three categories. It was developed by a Viennese company that fed the system with labor market data from previous years, including gender, age, nationality, place of residence, and potential care obligations, i.e. children or relatives in need of care. For each new job seeker, the system then calculates a prediction of how likely they are to find work again within a certain period. Support such as further training is funded only for those whose chances appear reasonable. The human being becomes a statistical probability.
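
To make the mechanism concrete: what is described here is essentially a statistical scoring model that turns personal attributes into a predicted probability of re-employment and then sorts people into groups by thresholds. The following Python sketch illustrates only that general idea; the `JobSeeker` fields, the weights, and the thresholds are hypothetical choices for illustration, not the coefficients or cut-offs actually published for the AMS system.

```python
# Minimal sketch of a scoring model of the kind described above.
# All weights and thresholds are hypothetical; they are NOT the published AMS figures.
from dataclasses import dataclass
import math


@dataclass
class JobSeeker:
    age: int
    gender: str             # "female" or "male"
    citizenship_eu: bool
    care_obligations: bool  # children or relatives in need of care


# Hypothetical weights for illustration only.
WEIGHTS = {
    "intercept": 0.8,
    "age_over_50": -0.6,
    "female": -0.2,
    "non_eu": -0.4,
    "care_obligations_female": -0.3,  # assumes care duties are only counted for women
}


def employment_probability(p: JobSeeker) -> float:
    """Logistic score: predicted chance of finding work within a given period."""
    z = WEIGHTS["intercept"]
    if p.age >= 50:
        z += WEIGHTS["age_over_50"]
    if p.gender == "female":
        z += WEIGHTS["female"]
        if p.care_obligations:
            z += WEIGHTS["care_obligations_female"]
    if not p.citizenship_eu:
        z += WEIGHTS["non_eu"]
    return 1.0 / (1.0 + math.exp(-z))


def categorize(p: JobSeeker) -> str:
    """Sort a job seeker into one of three groups by (made-up) probability thresholds."""
    prob = employment_probability(p)
    if prob >= 0.66:
        return "high chances"
    if prob >= 0.25:
        return "medium chances"
    return "low chances"


if __name__ == "__main__":
    person = JobSeeker(age=52, gender="female", citizenship_eu=True, care_obligations=True)
    print(categorize(person), round(employment_probability(person), 2))
```

Even in this toy version, the logic the critics object to is visible: attributes such as age, gender, or care obligations directly lower the score, and the score in turn decides access to support.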

Can a country treat its citizens this way?

Technically, it may be simple. Ethically and legally, however, the AMS procedure raises difficult questions. May a state deny its citizens assistance on the basis of such predictions? And should an algorithm, i.e. automated decision-making, be the judge of that? Whether someone is supported or more or less written off has a huge impact on the rest of their life.

The answers to such questions have far-reaching consequences, not only for each individual but for civil society as a whole. In Austria, scholars and civil rights organizations have criticized the system from the start. In their view, it discriminates against those who are already disadvantaged in the labor market. Older people and women have points deducted per se – women even more so if they have children. For men, children make no difference.

Is that sexist? Discriminatory? No, says AMS board member Johannes Kopf: the system merely reflects the real conditions of the labor market. The mathematician Paola Lopez, who researches the topic, describes the AMS algorithm as a “discrimination scale” that could in principle be put to sensible use – to give special support to precisely those who are most disadvantaged (36).

No one thinks about the rights of those affected

The people most affected by such decisions are often members of marginalized and socially disadvantaged groups. They already find it harder than others to take part in decision-making processes, to obtain equal treatment and social security, or to access information. Automated decision-making processes make it even more difficult for them to secure the material basis for a self-determined life.

A bitter battle has been raging over this in Austria for more than a year. After a test phase, the AMS had actually planned to roll out the model nationwide at the beginning of 2021. Then the data protection authority stepped in: the system serves to “identify relevant intervention features” of people, it found, and for that a legal basis must first be created. The authority ordered the trial operation to be stopped. Shortly afterwards, the Federal Administrative Court overturned the ban. To assess the need for support, the AMS may indeed feed job seekers’ data into the model, it ruled in December 2020; only fully automated decisions remain strictly prohibited. In theory, this paves the way for the system’s use.

AMS representatives repeatedly stress that nothing is decided automatically here: the system merely supports the advisors in their decisions, and in the end a human always decides. That does not convince the critics. They point to studies showing that when a machine delivers a particular assessment, people rarely disregard it.

That so much is known about the AMS algorithm at all is highly unusual. As a rule, automated decision-making processes come with an entirely different problem: for those affected, they are a black box, incomprehensible from the outside. The AMS algorithm is known only because the company responsible has published the formula, at least as an example for some cases.

But the example also shows that transparency alone is not enough; there must also be ways to object. What good is it to job seekers in Austria that it is publicly documented how the system disadvantages them, if they have no way of opting out of the assessment or contesting the forecast?

Suspicious by algorithm

Human rights organizations and activists are therefore demanding clear rules on when such systems may be used at all – whether by companies or by states. The AMS controversy has become a case study in what can go wrong when public authorities use automated predictions to grant or deny access to services. At present, such systems operate in a legal gray area.

The German financial regulator BaFin, for example, oversees the use of algorithms in high-frequency trading on the stock exchange. But if a municipality or an authority decides to use automated processes to track down benefit fraud and undeclared work – that is, to use algorithms against people – no one checks them.

That is what happened in the Netherlands, where the Ministry of Social Affairs used a program called SyRI, short for Systeem Risico Indicatie, for years to analyze all kinds of sensitive social and registration data in order to flag people who might have wrongly received unemployment or housing benefits. Anyone who lit up red in SyRI’s data analysis could soon expect a home visit. Those affected were not informed of the suspicions against them, and the ministry refused to reveal exactly what data was analyzed and how the system reached its conclusions.

Here, too, the pattern of using such systems against those who are already disadvantaged repeated itself. The system was deployed mainly in neighborhoods considered problem areas: their residents were poor and included a disproportionately large number of immigrants who depended largely on state support. Only a court ruling in The Hague in early 2020 put an end to this risk profiling in the heart of Europe. The court found that the practice violated the European Convention on Human Rights because the program could discriminate against poor people and immigrants. Activists and civil society organizations had previously fought it in vain for years.

The legal framework is full of loopholes

May companies and authorities simply let algorithms loose on their customers and citizens – even when important things like a loan or social benefits are at stake? In theory, a legal framework already protects people in the European Union from such automated processes: wherever personal data is involved, European data protection rules apply and prohibit so-called fully automated decisions. The two examples from Austria and the Netherlands show how little comfort this rule provides in practice.

With the ‘Artificial Intelligence Act’, a set of rules for so-called artificial intelligence is now taking shape in the European Union. It is meant to focus above all on high-risk systems such as biometric facial recognition. But human rights organizations such as Human Rights Watch fear that algorithms used in social security will fall through loopholes in the legislation.

(Reprinted with permission from Bread for the World. All rights reserved.)
