Abstract

AI-based tools for predicting the risk of recidivism raise numerous challenges for the foundations of criminal law. There is an urgent need for introspection in the emerging discipline of "algorithmic fairness", which aims to build ethical tools adapted to the concept of "justice". The aim is to provide methodological clarity in a field where disparate disciplines (data science, mathematics and law) converge, by addressing the following questions: Is it possible to translate concepts such as "fairness" or "non-discrimination" into mathematical language? Are there multiple concepts of "fairness", and are they mutually compatible? What results do they yield? Is it possible to bridge the gap between the two languages so as to provide objectively fairer outcomes? How should the development of rights such as equality and non-discrimination shape the programming of these tools? Such an examination will allow us to provide the necessary protection to groups at risk of being increasingly marginalised by the system.
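The question of whether distinct mathematical definitions of "fairness" are compatible can be made concrete with a toy sketch (not drawn from the paper itself). Two widely discussed criteria are demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). The group labels, predictions and outcomes below are invented for illustration, as are the helper function names:

```python
# Illustrative sketch: two standard formalisations of "fairness" applied to
# invented risk-prediction data, showing that satisfying one criterion does
# not imply satisfying the other.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, gr in zip(preds, groups) if gr == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, groups, labels):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        pos = [p for p, gr, y in zip(preds, groups, labels) if gr == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Toy data: binary risk predictions, group membership, true reoffending outcome.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 0, 0, 1, 0, 0, 0]     # base rates differ between groups
preds  = [1, 1, 0, 0, 0, 1, 1, 0]     # both groups receive 50% positive predictions

print(demographic_parity_gap(preds, groups))         # 0.0 -> demographic parity holds
print(equal_opportunity_gap(preds, groups, labels))  # 1.0 -> equal opportunity badly violated
```

Here the classifier flags exactly half of each group, so demographic parity is perfectly satisfied, yet it catches every true reoffender in group A and none in group B. This is the kind of tension between formal criteria that the translation of "fairness" into mathematics must confront.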