Fairness in Automated Decision-Making—FairADM

Research question/goal: 

In the rapidly evolving landscape of algorithmic decision-making (ADM), questions surrounding its potential reinforcement of social inequality have gained considerable attention. The project investigated critical aspects of ADM, focussing on its profound implications for the social sciences and the broader societal landscape.

Our main goal was to address the relationship between ADM and social inequality, algorithmic fairness, and distributive justice. In a first paper, we demonstrated the potential of the social sciences to enrich the discourse on ADM by emphasising the importance of uncovering and mitigating biases in training data, understanding data processing and analysis, and exploring the social contexts in which algorithms operate. In a second paper, we introduced a crucial distinction between algorithmic fairness and distributive justice in data-driven decision-making, fostering a systematic investigation of their interplay. We proposed the concept of ‘error fairness’ as a new measure of algorithmic fairness and provided arguments for the explicit inclusion of distributive justice principles in allocation decisions. In a third paper, we evaluated the practical application of ADM in public employment services, with a focus on predicting jobseekers' risk of long-term unemployment. We emphasised the significance of transparent modelling decisions and systematic evaluations in the implementation of statistical profiling techniques.
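To make the idea of statistical profiling in public employment services concrete, the following is a minimal, purely illustrative sketch: a simple logistic model that scores a jobseeker's risk of long-term unemployment from administrative features. The feature names, weights, and intercept are invented for illustration and are not the model evaluated in the paper.

```python
import math

def risk_score(features, weights, intercept):
    """Logistic regression: estimated P(long-term unemployment | features)."""
    z = intercept + sum(w * features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented coefficients for illustration only.
weights = {"months_unemployed_last_5y": 0.08, "age": 0.02, "has_degree": -0.7}
jobseeker = {"months_unemployed_last_5y": 18, "age": 45, "has_degree": 0}

print(round(risk_score(jobseeker, weights, intercept=-2.0), 3))
```

A transparent modelling decision in this setting would include, for example, documenting which features enter the score and how the risk threshold that triggers an intervention was chosen.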

Collectively, these papers highlight the crucial role of the social sciences in mitigating the unintended consequences of ADM. They argue for a holistic understanding of fairness and justice in algorithms that goes beyond mere predictive accuracy. ‘Error fairness’ offers a novel perspective on evaluating fairness in algorithms, emphasising that prediction errors should not systematically differ across individuals.
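The intuition behind 'error fairness' can be sketched in a few lines of code: compare misclassification rates across groups and report the largest gap. This is a hypothetical illustration of the general idea that errors should not systematically differ, not the formal measure proposed in the paper.

```python
def group_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(y_true[i] != y_pred[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

def error_fairness_gap(y_true, y_pred, groups):
    """Largest difference in error rates between any two groups.

    0.0 means prediction errors are evenly distributed across groups."""
    rates = group_error_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: group 'b' is misclassified far more often than group 'a'.
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]

print(group_error_rates(y_true, y_pred, groups))   # {'a': 0.0, 'b': 1.0}
print(error_fairness_gap(y_true, y_pred, groups))  # 1.0
```

Even though overall accuracy here is 50%, the errors fall entirely on one group, which is exactly the pattern such a measure is meant to surface.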

In conclusion, our project shows ways for the social sciences to contribute to a fairer use of ADM. By addressing biases, understanding the interplay of fairness and justice, and emphasising transparency, we provide valuable insights towards a comprehensive understanding of the social impacts of algorithmic decision-making.

Fact sheet

Funding: 
Baden-Württemberg Stiftung
Duration: 
2020 to 2023
Status: 
completed
Data Sources: 
administrative labour market records
Geographic Space: 
Germany

Publications