Predictive Policing and Criminal Prediction Systems
Predictive policing, once the subject of science fiction, is now a reality. These systems analyze historical data, such as crime records, location information, and personal profiles, to forecast where crimes will occur or to identify individuals deemed likely to commit them. The evidence, however, shows that these practices perpetuate discrimination and infringe on fundamental rights.
For example:
In the Netherlands, the "Top 600" list profiles young people, many of Moroccan descent, as likely future offenders; those listed frequently report harassment by the authorities as a result.
In Italy, the "Delia" system incorporates ethnicity data when profiling individuals for the likelihood of future offending.
These systems disproportionately target racial minorities and economically disadvantaged communities, reinforcing structural inequalities.
Despite these issues, predictive systems heavily influence real-life outcomes, including surveillance, questioning, fines, arrests, and even decisions related to prosecution and sentencing.
Problems with AI in Criminal Justice
AI and ADM systems used in criminal justice pose several critical challenges:
Discrimination and Bias
These systems are typically trained on biased data that reflects societal and institutional prejudice: arrest and incident records capture where police have historically concentrated their attention, not where crime actually occurs. A model trained on such records sends officers back to the same over-policed communities, which generates still more records and produces a self-reinforcing feedback loop, deepening discrimination against racial and ethnic minorities in policing and justice systems. The toy simulation below illustrates the mechanism.
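This is a minimal, hypothetical sketch, not a model of any real deployment: two districts share the same underlying crime rate, but district A starts with more recorded incidents because it was patrolled more heavily in the past. Patrols are then allocated in proportion to recorded incidents, and crime is only recorded where officers are present to observe it. All names, rates, and numbers are invented for illustration.

```python
import random

random.seed(0)

# Two districts share the SAME underlying crime rate; district "A" simply
# starts with more recorded incidents because of heavier past patrols.
# All values here are invented for illustration.
true_rate = 0.1                    # identical in both districts
recorded = {"A": 60, "B": 30}      # biased starting data
patrols_per_day = 10

for day in range(200):
    total = sum(recorded.values())
    for district in recorded:
        # "Predictive" allocation: patrol in proportion to past records.
        patrols = round(patrols_per_day * recorded[district] / total)
        # Crime is only recorded where officers are present to observe it.
        recorded[district] += sum(
            random.random() < true_rate for _ in range(patrols)
        )

print(recorded)  # district A's recorded lead persists and widens
```

Even though the true rates are identical, the allocation never corrects toward an even split: the initial recording bias is locked in and the gap in the data keeps growing, because the system's outputs determine what data it sees next.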
Presumption of Innocence
Profiling individuals as likely offenders before any crime has been committed undermines the fundamental legal principle that a person is presumed innocent until proven guilty.
Transparency and Accountability
The lack of clarity about how these systems operate prevents scrutiny and makes it difficult to challenge decisions or seek redress. Individuals are often unaware that they have been subjected to automated decision-making at all.
What Should Be Done?
Governments and institutions must implement robust regulations to ensure these technologies do not harm individuals or violate their rights. Key recommendations include:
Outright Ban on Predictive Systems
Prohibit the use of AI systems for profiling, predictive policing, and risk assessments.
Bias Testing
Mandate independent bias testing during all phases of system design and deployment to ensure fairness; one simple form such a test can take is sketched below.
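As an illustration only, the following sketch applies the "four-fifths" disparity heuristic (borrowed from US employment-discrimination guidance, and only one of many metrics a real audit would use) to a system's outputs. The group labels and outcome data are hypothetical; a full audit would also examine calibration, error-rate parity, and the data pipeline itself.

```python
from collections import defaultdict

def four_fifths_check(outcomes, threshold=0.8):
    """Compare per-group rates of an adverse outcome (e.g. being flagged
    'high risk'). A min/max rate ratio below `threshold` signals a
    disparity that warrants independent investigation.

    outcomes: iterable of (group_label, adverse_outcome) pairs.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, adverse in outcomes:
        totals[group] += 1
        hits[group] += bool(adverse)

    rates = {g: hits[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical audit data: 30% of group_1 flagged versus 12% of group_2.
sample = ([("group_1", True)] * 30 + [("group_1", False)] * 70
          + [("group_2", True)] * 12 + [("group_2", False)] * 88)

rates, ratio, passes = four_fifths_check(sample)
print(rates, round(ratio, 2), passes)  # ratio 0.4 -> the system fails
```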
Transparency and Evidence
Make system operations and decision-making processes clear to those affected. Decision-makers must be able to produce the evidence underlying any decision influenced by an AI system.
Accountability Mechanisms
Notify individuals whenever an AI or ADM system impacts a criminal justice decision. Establish procedures for challenging decisions and seeking redress.
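One possible building block for both requirements is a structured record created every time an automated system touches a decision, so that the affected person can be notified and a challenge can point to exactly what was used. The fields below are an assumption about what such a record might contain, sketched for illustration; they are not drawn from any statute or existing system.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ADMDecisionRecord:
    """Audit record for a criminal-justice decision influenced by an
    AI/ADM system. Field names are illustrative, not a standard."""
    case_id: str
    system_name: str            # which tool produced the output
    system_version: str         # exact version, for reproducibility
    inputs_summary: dict        # data the system actually received
    system_output: str          # score or recommendation as produced
    human_decision: str         # what the decision-maker ultimately did
    decision_rationale: str     # evidence beyond the system's output
    subject_notified: bool = False
    challenge_instructions: str = "See enclosed notice on appeal rights."
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: every name and value below is invented.
record = ADMDecisionRecord(
    case_id="2024-000123",
    system_name="risk_tool_x",
    system_version="3.1.4",
    inputs_summary={"prior_contacts": 2, "district": "B"},
    system_output="medium risk",
    human_decision="no further action",
    decision_rationale="Output not corroborated by any evidence.",
)
print(json.dumps(asdict(record), indent=2))
```

A record like this makes notification a mechanical step rather than an afterthought, and gives anyone challenging the decision a fixed account of what the system saw, what it said, and what the human decided.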