AI Ethics × Criminal Justice × Expert Analysis

AI in the Justice System: Everything You Need to Know

Can AI sentence you to prison? It already has. These are the questions people are asking about artificial intelligence, algorithmic justice, and robot crime—answered by a 20-year trial attorney who wrote about this before it happened.

2016 First AI Sentencing Case
6 Years Prison from Algorithm
Secret Algorithm Methodology
20+ Years Trial Experience

Frequently Asked Questions

Landmark Case
Can AI sentence someone to prison?
Yes, AI has already been used to help sentence people to prison. In State v. Loomis (2016), a Wisconsin court relied in part on the COMPAS risk assessment algorithm in sentencing Eric Loomis to 6 years in prison. COMPAS scores defendants on a 1-10 decile scale and classifies them as low, medium, or high risk of reoffending. Loomis appealed, arguing his due process rights were violated because neither he nor the court could see how the algorithm reached its conclusions. The Wisconsin Supreme Court upheld the sentence, ruling that AI-assisted sentencing does not violate due process, even when the algorithm's inner workings are protected as trade secrets.
Legal Precedent
What is the State v. Loomis case?
State v. Loomis (2016) is a landmark Wisconsin Supreme Court case where AI was used to determine a criminal sentence. Eric Loomis was sentenced to 6 years in prison based partly on a risk assessment from the COMPAS algorithm. The court ruled that using AI in sentencing does not violate due process, even though the defendant cannot examine the algorithm's methodology. The decision went against established precedent that defendants have a right to review information used in their sentencing. This case set a troubling precedent for AI in the criminal justice system.
Bias & Fairness
Is AI biased in criminal sentencing?
Evidence strongly suggests AI sentencing tools contain bias. A 2016 ProPublica investigation found that COMPAS, the algorithm used in State v. Loomis, falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants. In 2018, Reuters reported that Amazon had scrapped an internal AI hiring tool after discovering it had taught itself to penalize résumés from women. The fundamental problem is that AI learns from historical data, which often encodes human biases. When that biased data is fed into a "black box" algorithm whose workings are secret, there is no way to verify that the output is fair. As of 2025, these concerns have only grown as AI tools become more prevalent in courtrooms.
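The kind of audit ProPublica performed comes down to a simple comparison: among people who did not reoffend, how often did the tool flag each group as high risk? A minimal sketch, using entirely synthetic records (not real COMPAS data), shows how that false-positive-rate gap is computed:

```python
# Illustrative only: synthetic records, not real COMPAS data.
# A false positive = the tool flagged someone high risk who did NOT reoffend.

def false_positive_rate(records):
    """Share of non-reoffenders the tool wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    false_positives = [r for r in non_reoffenders if r["high_risk"]]
    return len(false_positives) / len(non_reoffenders)

# Hypothetical audit data for two demographic groups.
group_a = [{"high_risk": hr, "reoffended": ro} for hr, ro in
           [(True, False), (True, False), (False, False), (True, True), (False, False)]]
group_b = [{"high_risk": hr, "reoffended": ro} for hr, ro in
           [(True, False), (False, False), (False, False), (True, True), (False, False)]]

fpr_a = false_positive_rate(group_a)  # 2 of 4 non-reoffenders flagged -> 0.50
fpr_b = false_positive_rate(group_b)  # 1 of 4 non-reoffenders flagged -> 0.25
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

In this toy data, group A's false positive rate is double group B's, even though the tool never sees group membership directly. That is the pattern ProPublica reported: the disparity emerges from the data the tool was trained on, not from any explicit rule.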
Technical Concept
What is a "black box" in AI?
A "black box" in AI refers to a system where you can see the inputs and outputs, but the internal decision-making process is hidden or incomprehensible. Think of your smartphone: you know how to use it, but you don't know how its software actually reaches its decisions. In criminal justice, black box AI is dangerous because judges rely on outputs (sentencing recommendations) without understanding how those recommendations were generated. Companies often protect these algorithms as trade secrets, meaning defendants cannot challenge the methodology used to determine their fate.
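The black-box problem can be made concrete with a toy sketch (this is an illustration, not any real risk tool): the caller sees only the inputs and a final label, while the weighting logic stays hidden, just as a trade-secret algorithm's would.

```python
# Toy illustration of the "black box" problem; not a real risk tool.
class OpaqueRiskScorer:
    def __init__(self):
        # Hidden internals: neither the defendant nor the court sees these.
        self._weights = {"age": -0.1, "priors": 0.8, "employment": -0.3}

    def score(self, age, priors, employed):
        # "Proprietary" logic, invisible from the outside.
        raw = (self._weights["age"] * age
               + self._weights["priors"] * priors
               + self._weights["employment"] * (1 if employed else 0))
        return "high risk" if raw > 0 else "low risk"

scorer = OpaqueRiskScorer()
label = scorer.score(age=25, priors=4, employed=True)
print(label)  # the label is all a court sees
```

The output label is the only thing visible in the courtroom; without access to the hidden weights, there is no way to audit why one defendant scored "high risk" and another did not, which is exactly the objection Loomis raised.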
Criminal Liability
Who is liable when a robot commits a crime?
Robot criminal liability is an emerging legal question with no clear answer. Potential liable parties include: the manufacturer, the programmer, the owner, or the AI itself. Criminal law requires both an act (actus reus) and a guilty mind (mens rea). If a robot kills someone, it committed the act, but proving intent is nearly impossible with current AI. Some argue robots should be tried in separate courts; others say liability should fall on manufacturers. The legal system is not prepared for truly autonomous AI that makes independent decisions leading to harm.
Philosophy
Can robots be punished for crimes?
Traditional punishment may not work for robots. Prison, fines, and social stigma—the main deterrents for humans—mean nothing to machines that don't eat, don't need freedom, and don't care about reputation. Some propose "destroying" criminal robots, but if AI learns from shared experiences in the cloud, destroying one unit may not deter others. Alternatively, showing robots the consequences of other machines' crimes might embolden rather than deter them. The entire framework of criminal punishment was designed for humans with human motivations—it may need complete reimagining for AI.
Tools
What is COMPAS AI?
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a proprietary risk assessment tool, developed by Northpointe (now Equivant), used in U.S. courts to predict the likelihood that a defendant will reoffend. It generates risk scores that judges consult in bail, sentencing, and parole decisions. COMPAS became controversial after the State v. Loomis case and a 2016 ProPublica investigation that found it produced higher false positive rates for Black defendants. The algorithm's methodology is protected as a trade secret, meaning defendants cannot examine how their risk scores were calculated, raising serious due process concerns.
Dangers
What are the dangers of AI in the justice system?
AI in the justice system poses several dangers:
1) Lack of transparency: algorithms are often trade secrets, preventing defendants from challenging their methodology.
2) Embedded bias: AI learns from historical data that may contain racial, gender, or socioeconomic bias.
3) Due process violations: defendants cannot confront or cross-examine an algorithm.
4) Precedent erosion: courts have ruled AI sentencing doesn't require the same disclosure as human-generated reports.
5) Accountability gaps: when AI makes a wrong decision, there's no clear party to hold responsible.
6) Dehumanization: reducing human beings to data points removes context, empathy, and individualized justice.
Prediction
Did anyone predict AI would be used in courts?
Yes. Robert Kiesling, a Texas trial attorney, wrote about AI replacing human judges in his 2020 novel "Discredited Citizen"—depicting a dystopian future where algorithms control the justice system. His earlier works, including "In Memory of Man" (2007), explored AI consciousness and threats to humanity years before ChatGPT existed. As a practicing trial lawyer with 20 years of courtroom experience, Kiesling's fiction draws on insider knowledge of how human judgment operates in legal proceedings—and what might be lost when machines take over.
Current State
Is AI replacing lawyers and judges?
AI is increasingly being used in legal settings, though full replacement remains controversial. Current AI applications include: risk assessment tools like COMPAS for sentencing, document review and discovery, legal research, contract analysis, and predictive analytics for case outcomes. Some jurisdictions use AI chatbots for basic legal guidance. However, the core functions of advocacy, judgment, and discretion still require humans. The bigger concern isn't replacement but augmentation—judges relying on AI recommendations without understanding the underlying methodology, effectively ceding decision-making to machines while maintaining the appearance of human oversight.
Concept
What is algorithmic justice?
Algorithmic justice refers to the use of computer algorithms and AI systems to make or inform decisions in the legal system—including bail, sentencing, parole, and predictive policing. Proponents argue algorithms can reduce human bias and increase consistency. Critics counter that algorithms embed and amplify existing biases, lack transparency, violate due process, and remove the human judgment essential to justice. The debate intensified after cases like State v. Loomis and studies showing racial disparities in AI risk assessment tools. As of 2025, algorithmic justice is expanding despite unresolved ethical concerns.
About
What is Robot Crime Blog?
Robot Crime Blog is a platform dedicated to uncovering the hidden use of artificial intelligence in daily life and examining its consequences. Founded by Robert Kiesling, a 20-year trial attorney and dystopian sci-fi author, it combines legal expertise with creative vision to explore AI ethics, algorithmic justice, and the future of human-machine interaction. The blog features analysis of real cases like State v. Loomis, commentary on AI policy, and connections to Kiesling's novels which predicted many current AI developments years before they occurred.