
Ethical ML: mind the assumptions

Isabel Valera (Saarland University, Max Planck Institute for Software Systems)
As automated data analysis supplements and even replaces human supervision in consequential decision-making (e.g., pretrial bail and loan approval), there are growing concerns among civil-society organizations, governments, and researchers about the potential unfairness and lack of transparency of these algorithmic systems. To address these concerns, the emerging field of ethical machine learning has focused on proposing definitions and mechanisms to ensure the fairness and explainability of the outcomes of these systems. However, as we will discuss in this work, existing solutions are still far from perfect and face significant technical challenges. Specifically, I will show that, to achieve ethical ML, it is essential to take a holistic view of the system, from the data collection process before training all the way to the deployment of the system in the real world. Wrong technical assumptions may indeed come at a high social cost.
As an example, I will focus on my recent work on both fair algorithmic decision-making and algorithmic recourse. First, I will show that algorithms may indeed amplify the existing unfairness in the data if their assumptions do not hold in practice. Then, I will focus on algorithmic recourse, which aims to guide individuals affected by an algorithmic decision system on how to achieve the desired outcome. In this context, I will discuss the inherent limitations of counterfactual explanations and argue for a paradigm shift from recourse via nearest counterfactual explanations to recourse through interventions, which directly accounts for the underlying causal structure in the data. Finally, I will discuss how to achieve recourse in practice when only limited causal information is available.
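To make the contrast concrete, here is a minimal sketch (not from the talk; the two-variable linear structural causal model, exogenous values, and decision rule are all made-up illustrations) of why intervening on a causally upstream feature differs from independently perturbing features toward a nearest counterfactual:

import_free_example = """
# Hypothetical linear SCM (illustration only, not from the talk):
#   education := u_e
#   income    := 2 * education + u_i
# Assumed decision rule: approve if income >= 10.

def income_of(education, u_i):
    # Structural equation for income given education and its noise term.
    return 2.0 * education + u_i

# Factual individual (made-up exogenous values).
u_e, u_i = 3.0, 1.0
education = u_e
income = income_of(education, u_i)  # 7.0 -> loan denied

# (a) Nearest counterfactual explanation: perturb features independently.
#     The cheapest flip sets income to 10, but this offers no feasible
#     action, since income is not directly actionable by the individual.
counterfactual_income = 10.0

# (b) Recourse through interventions: act on the upstream feature and let
#     the change propagate through the causal structure.
#     2 * education + u_i >= 10  =>  education >= 4.5
new_income = income_of(4.5, u_i)
print(new_income >= 10.0)  # True: the intervention achieves recourse
"""

Under these assumptions, the counterfactual explanation names a desired feature vector, while the interventional view prescribes an action whose downstream effects the causal model propagates; the latter is what the individual can actually carry out.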
(Registration closes 2 hours before the beginning of the seminar.)
