Lea Cohausz, Jakob Kappenberger, Heiner Stuckenschmidt
What fairness metrics can really tell you: A case study in the educational domain

Pp. 792-799 in: LAK'24: Proceedings of the 14th Learning Analytics and Knowledge Conference, Kyoto, Japan. New York: Association for Computing Machinery, 2024.

Recently, discussions of fairness and algorithmic bias have gained prominence in the learning analytics and educational data mining communities. To quantify algorithmic bias, researchers and practitioners often apply popular fairness metrics, e.g., demographic parity, without justifying their choice. This is problematic, as the appropriate choice of metric depends strongly on the underlying data generation mechanism, the intended application, and normative beliefs. Likewise, whether and how one should address the bias a metric indicates depends on these aspects. This paper presents and discusses several theoretical cases to highlight precisely these dependencies. By providing a set of examples, we hope to encourage a practice in which researchers discuss potential fairness concerns by default.
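To make concrete what a metric like demographic parity measures, the following minimal Python sketch (not code from the paper; the function name and toy data are illustrative) computes the demographic parity difference for a binary classifier over two groups. A value of 0 means both groups receive positive predictions at the same rate.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    0 indicates the predictions satisfy demographic parity.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: the model predicts positively for 60% of group 0
# but only 40% of group 1, giving a disparity of about 0.2.
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # ~0.2
```

As the paper's cases illustrate, a nonzero value here does not by itself tell you whether intervention is warranted; that judgment depends on how the data were generated, the application, and normative beliefs.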