Conference Paper 2022 ● Open Access

The Bias of Artificial Intelligence
within the Justice Field

Author Ludovico Papalia
Published 28 November 2022
Venue AIxHMI 2022 — CEUR-WS Vol. 3368
Language English

Algorithmic systems are increasingly deployed in judicial and law enforcement contexts, from recidivism prediction to facial recognition in police investigations. This paper analyses how bias originates in machine learning models used within the justice system, tracing its sources to training data, feature selection, and model design choices. It examines the legal and ethical consequences of biased AI under EU and international frameworks, arguing that current regulatory approaches, including the EU AI Act, remain structurally insufficient to address the feedback loops through which algorithmic decisions entrench historical discrimination. The paper proposes a set of transparency and auditability requirements that could be operationalised within existing legal instruments.

Artificial Intelligence · Algorithmic Bias · Justice System · Criminal Justice · Legal Informatics · EU AI Act · Fairness · Transparency · Recidivism Prediction
Papalia, L. (2022). The bias of artificial intelligence within the justice field. In Proceedings of AIxHMI 2022: Artificial Intelligence for Human Machine Interaction. CEUR Workshop Proceedings, Vol. 3368. https://ceur-ws.org/Vol-3368/