Angela Lipps spent nearly six months in prison due to an AI facial recognition error
50-year-old Tennessee resident Angela Lipps endured a true nightmare: she was arrested and held in jail for nearly six months on bank fraud charges in North Dakota, a state she had never visited.
All of this happened because of an error in the AI-powered facial recognition system used by Fargo police, as The Guardian reports in detail.
How the AI facial recognition error happened: a timeline of events
In April and May 2025, a series of bank frauds occurred in the Fargo, North Dakota, area: an unknown woman used a fake US Army ID to withdraw tens of thousands of dollars.
Police reviewed surveillance footage and used AI-powered facial recognition software. The system returned a match for Angela Lipps based on her facial features, build, and hairstyle.
The detective further checked her social media photos and driver’s license and decided there was a significant match. On July 14, 2025, a squad of U.S. Marshals raided Lipps’s Tennessee home with guns drawn while she was babysitting her four grandchildren.
She was arrested as a fugitive from North Dakota. She spent approximately four months in a Tennessee jail without bail and was then transported to North Dakota, where she remained in custody for a further period, spending almost six months behind bars in total.
Angela Lipps categorically denies any involvement: “I’ve never been to North Dakota; I don’t know anyone there. It was truly terrifying—I can still picture them breaking into my home with guns.”
After evidence, including her alibi and geolocation data, completely confirmed her innocence, the charges were dropped. However, the damage was enormous: job loss, trauma for her children and grandchildren, financial hardship, and emotional exhaustion. Angela is currently trying to "rebuild her life," as she put it in an interview. Her lawyer emphasizes that the police's overreliance on AI, without sufficient human verification, was the key reason for the error.
Lipps's case is not isolated: AI facial recognition errors have already led to false arrests across the United States. Research shows that such systems are more likely to misidentify women, people with dark skin, and minorities, with accuracy dropping to 80-90% in real-world settings.
This incident raises serious questions about the regulation of artificial intelligence in policing. Experts from the ACLU and the Electronic Frontier Foundation have long warned that without strict rules and mandatory verification by a human expert, technology can ruin the lives of innocent people.
Such cases are yet another example of why the use of AI in criminal justice needs to be monitored, and a reminder of how technologies intended to aid justice can instead become a source of injustice.
