Is Artificial Intelligence Responsible for Incarcerating Innocent People?

April 9, 2024 | Criminal Defense

The use of artificial intelligence is rapidly increasing in fields across the globe, and criminal justice is one area it has begun to affect. While artificial intelligence has the potential to help law enforcement and other parts of the legal system with tasks such as facial recognition and crime forecasting, the technology is still relatively new and can introduce errors of its own into the process.

This page provides insight into artificial intelligence, how it is currently being applied to criminal cases, and several examples of people being wrongfully arrested due to errors in AI facial recognition.

What is Artificial Intelligence?

John McCarthy, who coined the term, defined Artificial Intelligence (AI) as “the science and engineering of making intelligent machines.”

The Office of Justice Programs (OJP) describes AI as the ability of a machine to both perceive and respond to its environment. This includes performing tasks that typically require human intelligence and decision-making, but without any direct human intervention.

How is AI Being Applied to Criminal Cases?

Just as humans learn to recognize patterns and distinguish information based on experience, so can AI. Machine learning methods are starting to be used within the field of criminal justice, and pattern recognition is one of the main human capabilities that AI sets out to replicate, in the form of software algorithms and computer hardware.

The following are AI applications being used for criminal justice and public safety:

  • Facial Recognition – Facial images are relied on to help establish a potential suspect’s identity and whereabouts. AI helps intelligence analysts examine large volumes of images and video in a timely manner (a simplified sketch of the matching step appears after this list).
  • DNA Analysis – Researchers are working to use AI to address current challenges in DNA analysis, including the substantial amounts of complex data obtained in electronic format. They have begun to combine human analysis with AI algorithms and data mining.
  • Gunshot detection – Scientists have begun developing AI algorithms and methods to detect gunshots, determine shot-to-shot timing, determine the number of firearms present in an incident, assign specific shots to firearms, differentiate between muzzle blasts and shock waves, and estimate the likely class and caliber of each firearm.
  • Public safety and image analysis – Law enforcement and other criminal justice agencies use video and image analysis to obtain information about people, objects, and actions to support criminal investigations. Because of the volume of material and constantly changing technology, reviewing this content manually is time consuming and can lead to human error.
  • Crime forecasting – Crime forecasting is a method used mainly by law enforcement to predict potential crimes before they occur. AI has also been suggested for legal use: analyzing information to suggest rulings, identifying criminal enterprises, and predicting which individuals are at risk from criminal enterprises.
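To make the facial recognition workflow above more concrete, below is a minimal sketch of the core matching step. It is an illustration under broad assumptions, not any vendor’s actual code: most systems reduce each face image to a numeric “embedding” vector and rank database entries by similarity, and the function names, the 0.6 threshold, and the random embeddings here are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_candidates(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Rank gallery identities by similarity to the probe embedding.

    Scores above the threshold are investigative leads only, never proof of
    identity: lookalikes, poor image quality, and bias in the embedding
    model can all push a non-match over the line.
    """
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Hypothetical usage with made-up 128-dimensional embeddings.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1_000)}
probe = gallery["person_42"] + rng.normal(scale=0.3, size=128)  # a noisy photo
print(find_candidates(probe, gallery)[:3])  # best candidates first
```

The key design point is the threshold: set it lower and the system returns more leads but more false matches; set it higher and it misses genuine matches. Either way, the output is a ranked list of candidates, not an identification.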

The OJP suggests combining policing analytics with AI to better respond to incidents of suspected crime, prevent criminal threats, divert resources, and investigate criminal activity. Their study concludes by stating, “AI has the potential to be a permanent part of our criminal justice ecosystem, providing investigative assistance and allowing criminal justice professionals to better maintain public safety.”

However, AI’s use in criminal justice has already drawn criticism for leading to misidentifications.

The Innocence Project Warns of AI Leading to Misidentification

A 2024 report by the Innocence Project identified at least seven confirmed cases of misidentification caused by AI facial recognition.

The first documented case of facial recognition technology (FRT) leading to a wrongful arrest took place in January 2020. Robert Williams’ wife received a call saying that her husband needed to turn himself in to the police. She thought it was merely a prank call, until Williams arrived home to a police officer who arrested him; he was detained for 30 hours. Police alleged that Williams stole thousands of dollars’ worth of Shinola watches, based on an FRT match to the photo on Williams’ expired license.

Other individuals highlighted in the Innocence Project’s study include Nijeer Parks, Porcha Woodruff, Michael Oliver, Randall Reid, and Alonzo Sawyer. Including Williams, these six individuals were all arrested due to misidentification from FRT.

Part of the concern over AI technology such as FRT stems from the clear racial inequalities in both the criminal justice system and in policing. Research has argued that FRT is less reliable for Black and Asian people, because the algorithms have struggled to differentiate facial features across specific skin tones.
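One way researchers quantify this unreliability is by comparing false match rates across demographic groups at a fixed decision threshold. The toy sketch below uses entirely synthetic numbers and invented group names to show the mechanism only: if an algorithm’s similarity scores for non-matching faces are more spread out for one group, the same threshold produces far more false “hits” for that group.

```python
import numpy as np

def false_match_rate(nonmatch_scores: np.ndarray, threshold: float) -> float:
    """Fraction of known non-matching pairs scored at or above the threshold."""
    return float(np.mean(nonmatch_scores >= threshold))

# Entirely synthetic illustration: both groups are hypothetical, and the
# score distributions are invented to show the mechanism, not measured.
rng = np.random.default_rng(1)
threshold = 0.6
for group, spread in [("group_a", 0.06), ("group_b", 0.12)]:
    scores = rng.normal(loc=0.40, scale=spread, size=100_000)  # non-match scores
    print(f"{group}: false matches at threshold {threshold} = "
          f"{false_match_rate(scores, threshold):.4%}")
```

In this synthetic example, doubling the spread of the non-match scores raises the false match rate by roughly two orders of magnitude, which is why a threshold tuned on one population can behave very differently on another.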

The Innocence Project further warned that AI’s adoption in criminal law may end up echoing previous misapplications of technology and forensic science. Past examples include hair comparisons, bite mark analysis, and arson investigations, all of which have led to wrongful convictions.

The following is a statement provided by the Innocence Project’s Chris Fabricant:

“The technology that was just supposed to be for investigation is now being proffered at trial as direct evidence of guilt. Often without ever having been subjected to any kind of scrutiny… Corporations are making claims about the abilities of these techniques that are only supported by self-funded literature. Politicians and law enforcement that spend [a lot of money] acquiring them, then are encouraged to tout their efficacy and the use of this technology.”

In 2023, the Biden Administration issued an executive order on developing standards to address the potential risks of AI. Included in the executive order was a call to develop “tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.” Despite this directive, no federal policies have yet been put in place to regulate AI in policing.

How Can a Defense Attorney Help in Cases Involving AI?

Criminal accusations that rely on artificial intelligence are still new, which can be confusing to a person unfamiliar with the technology or how police use it. It is extremely beneficial to connect with a defense attorney who is knowledgeable about AI technology.

The following lists ways a defense attorney can help defend you in a case involving AI evidence:

  • Understand and analyze AI evidence obtained in a criminal investigation, potentially recommending an expert witness in AI, data science, or computer forensics;
  • Challenge whether the evidence obtained through AI is admissible in court, or whether it is unreliable or was obtained in violation of the defendant’s rights;
  • Cross-check the AI evidence by hiring an expert to review the relevant information; and
  • Investigate whether there was any bias in the AI methodology.

The defense team with Pumphrey Law can help defend your criminal case. Contact our office today to schedule your free consultation.

Contact Pumphrey Law Firm

If you or someone you love is being prosecuted for an alleged crime based on artificial intelligence evidence, you should consider hiring a defense attorney to represent your case. Depending on the charge you’ve been accused of, a conviction can have life-altering results. You could be required to pay fines and face imprisonment. To protect yourself and your future, consider entrusting your case to the experienced lawyers at Pumphrey Law Firm. We will do everything we can to achieve an optimal outcome in your case.

Call our office today to schedule a free consultation at (850) 681-7777 or fill out our online form.
