Algorithmic systems are increasingly deployed in judicial and law enforcement contexts, from recidivism prediction to facial recognition in police investigations. This paper analyses how bias originates in machine learning models used within the justice system, tracing its sources to training data, feature selection, and model design choices. It examines the legal and ethical consequences of biased AI under EU and international frameworks, arguing that current regulatory approaches, including the EU AI Act, remain structurally insufficient to address the feedback loops that entrench historical discrimination. The paper proposes a set of transparency and auditability requirements that could be operationalised within existing legal instruments.
Keywords
Artificial Intelligence · Algorithmic Bias · Justice System · Criminal Justice · Legal Informatics · EU AI Act · Fairness · Transparency · Recidivism Prediction
Cite this paper
Papalia, L. (2022). The bias of artificial intelligence within the justice field. In Proceedings of AIxHMI 2022: Artificial Intelligence for Human Machine Interaction. CEUR Workshop Proceedings, Vol. 3368. https://ceur-ws.org/Vol-3368/
@inproceedings{papalia2022bias,
  title     = {The Bias of Artificial Intelligence within the Justice Field},
  author    = {Papalia, Ludovico},
  booktitle = {Proceedings of AIxHMI 2022: Artificial Intelligence for Human Machine Interaction},
  series    = {CEUR Workshop Proceedings},
  volume    = {3368},
  year      = {2022},
  url       = {https://ceur-ws.org/Vol-3368/}
}
Papalia, Ludovico. "The Bias of Artificial Intelligence within the Justice Field." AIxHMI 2022: Artificial Intelligence for Human Machine Interaction, CEUR Workshop Proceedings, vol. 3368, 2022, https://ceur-ws.org/Vol-3368/.