Lea Cohausz, Jakob Kappenberger, Heiner Stuckenschmidt
Combining fairness and causal graphs to advance both

Pp. 1-14 in: Roberta Calegari, Virginia Dignum, Barry O'Sullivan (Eds.): Fairness and Bias in AI: Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024). 2024. Aachen: RWTH Aachen

Recent work on fairness in Machine Learning (ML) has demonstrated that knowing the causal relationships among variables is important for deciding whether a sensitive variable may have a problematic influence on a prediction, and for choosing an appropriate fairness metric and potential bias mitigation strategy. These causal relationships are best represented by Directed Acyclic Graphs (DAGs). So far, however, there is no clear classification of the different causal structures containing sensitive variables that can occur in these DAGs. This paper's first contribution is a classification of these structures into four classes, each with different implications for fairness. To uncover these structures, the DAGs must first be learned. Structure learning algorithms exist, but they currently make no systematic use of the background knowledge available when considering fairness in ML, even though this knowledge could increase the correctness of the learned DAGs. The second contribution is therefore an adaptation of structure learning methods that incorporates this background knowledge; our evaluation demonstrates that the adaptation increases correctness. Both contributions are implemented in our publicly available Python package causalfair, allowing anyone to evaluate which relationships in the data might become problematic when applying ML.