Fairness in Automated Decision-Making (FairADM)

Research question/goal: 

Artificial intelligence offers many opportunities to address complex societal problems. In the public sector, it is increasingly used for automated decision-making (ADM) and promises to enhance government efficiency by automating bureaucratic processes. By eliminating human judgement, ADM promises to reach correct decisions faster and to be neutral and objective. At the same time, however, concerns have been raised that ADM may foster discrimination or create new biases. Most findings on algorithmic fairness and discrimination stem from the U.S. context, with a strong focus on the technical aspects of the algorithms underlying the decision processes; little attention has been paid to the societal mechanisms and the specific decision-making context in which these algorithms operate. To close this research gap, the project systematically investigates and classifies ADM practices in the German public sector. It integrates previous research on algorithmic fairness with a sociological perspective on inequality and discrimination. To study fairness and discrimination in a real-world scenario, the project develops an ADM system using labour market data and evaluates it with respect to different fairness criteria.
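To illustrate what such fairness criteria can look like formally, two group fairness definitions that are standard in the algorithmic fairness literature are sketched below. The project description does not specify which criteria it applies, so these are assumed examples only. Here \(\hat{Y}\) denotes the model's decision, \(Y\) the true outcome, and \(A\) a protected attribute such as gender or nationality.

```latex
% Demographic parity: the rate of positive decisions is equal across groups
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \qquad \forall\, a, b

% Equalized odds: error rates are equal across groups, given the true outcome
P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y)
\qquad \forall\, a, b,\; y \in \{0, 1\}
```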

Current stage: 

A recent focus of our research has been implementing algorithmic profiling systems with German administrative data and analysing their fairness implications. This included audits of prediction performance and fairness across different modelling decisions. The results of these analyses were presented at various conferences and workshops and have been submitted for publication.
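A minimal sketch of what such a fairness audit might look like, assuming a binary profiling model (e.g., flagging job seekers as being at high risk of long-term unemployment) and a binary protected attribute; the function, data, and group labels here are hypothetical and not taken from the project:

```python
import numpy as np

def fairness_audit(y_true, y_pred, group):
    """Compare selection rates and error rates across protected groups.

    y_true: binary ground-truth outcomes (e.g., long-term unemployment)
    y_pred: binary model decisions (e.g., classified as 'high risk')
    group:  protected-attribute labels (e.g., gender or nationality)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        tp = np.sum((y_pred == 1) & (y_true == 1) & mask)
        fp = np.sum((y_pred == 1) & (y_true == 0) & mask)
        pos = np.sum((y_true == 1) & mask)
        neg = np.sum((y_true == 0) & mask)
        report[g] = {
            "selection_rate": y_pred[mask].mean(),  # demographic parity
            "tpr": tp / pos if pos else np.nan,     # equalized odds, part 1
            "fpr": fp / neg if neg else np.nan,     # equalized odds, part 2
        }
    return report

# Hypothetical example: audit random predictions across two groups
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)
y_pred = rng.integers(0, 2, 1000)
for g, metrics in fairness_audit(y_true, y_pred, group).items():
    print(g, {k: round(v, 3) for k, v in metrics.items()})
```

The per-group selection rates correspond to demographic parity, while the TPR and FPR comparisons correspond to equalized odds; in an actual audit these would be computed on real model predictions and repeated for each modelling variant under consideration.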

Fact sheet

Funding: 
Baden-Württemberg Stiftung
Duration: 
2020 to 2023
Status: 
ongoing
Data Sources: 
administrative labour market records
Geographic Space: 
Germany

Publications