Fairness in Automated Decision-Making (FairADM)

Research question/goal: 

Artificial intelligence offers many opportunities to address complex societal problems. In the public sector, it is increasingly being used for automated decision-making (ADM) and promises to enhance government efficiency by automating bureaucratic processes. By eliminating human judgement, ADM promises to reach correct decisions more quickly and to be neutral and objective. At the same time, however, concerns have been raised that ADM may foster discrimination or create new biases. Most findings on algorithmic fairness and discrimination stem from the U.S. context, with a strong focus on the technical aspects of the algorithms underlying the decision processes. When evaluating these algorithms, very little attention has been paid to societal mechanisms and to the specific decision-making context. To close this research gap, the project aims to systematically investigate and classify ADM practices in the German public sector. It integrates previous research on algorithmic fairness with a sociological perspective on inequality and discrimination. To study fairness and discrimination in a real-world scenario, the project develops an ADM system using labour market data and evaluates it against different fairness criteria.
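
To illustrate what evaluating an ADM system against different fairness criteria can look like in practice, the sketch below computes two common group fairness metrics, statistical parity difference and equal opportunity difference, for a binary classifier. This is a minimal Python illustration; the column names and synthetic data are hypothetical and do not represent the project's actual labour market data or model.

# Minimal sketch: group fairness metrics for a binary classifier.
# Column names (group, y_true, y_pred) and the data are hypothetical.
import numpy as np
import pandas as pd

def statistical_parity_difference(df, group_col, pred_col):
    """Gap in positive prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.max() - rates.min()

def equal_opportunity_difference(df, group_col, pred_col, label_col):
    """Gap in true positive rates between groups."""
    positives = df[df[label_col] == 1]
    tprs = positives.groupby(group_col)[pred_col].mean()
    return tprs.max() - tprs.min()

# Illustrative synthetic data: predictions for two demographic groups.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.integers(0, 2, 1000),   # protected attribute (0/1)
    "y_true": rng.integers(0, 2, 1000),  # observed outcome
    "y_pred": rng.integers(0, 2, 1000),  # model prediction
})

print("Statistical parity difference:",
      statistical_parity_difference(df, "group", "y_pred"))
print("Equal opportunity difference:",
      equal_opportunity_difference(df, "group", "y_pred", "y_true"))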

Current stage: 

One focus of our research has been investigating the potential of a sociological perspective on fairness in automated decision-making, particularly from a distributive justice point of view. Results of this research have been presented at various conferences and have been submitted for publication. Our current work focuses on detecting and correcting biases in an empirical application of algorithmic profiling, which will be complemented by stakeholder interviews.
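
As one illustration of what correcting for biases can mean in algorithmic profiling, the sketch below applies a standard post-processing idea: choosing group-specific decision thresholds so that positive prediction rates are equalized across groups. It is a hypothetical Python example, not the project's actual correction method, and the score distributions are simulated.

# Sketch of a post-processing bias correction: per-group thresholds
# chosen so each group receives the same positive prediction rate.
# All names and data are hypothetical.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Per-group score thresholds yielding the same positive rate."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # The (1 - target_rate) quantile admits ~target_rate of the group.
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 1000)
# Simulated scores whose distribution differs by group (a bias source).
scores = rng.normal(loc=0.4 + 0.2 * groups, scale=0.15, size=1000)

thresholds = group_thresholds(scores, groups, target_rate=0.3)
decisions = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
for g, t in thresholds.items():
    print(f"group {g}: threshold {t:.3f}, "
          f"positive rate {decisions[groups == g].mean():.2f}")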

Fact sheet

Funding: Baden-Württemberg Foundation
Duration: 2020 to 2022
Status: ongoing
Data Sources: administrative labour market records
Geographic Space: Germany
