On an April day, Trevis Williams was stopped by subway police in Brooklyn and taken into custody. He didn’t know what was going on.
Two days later, he was still sitting in jail. The charge? Exposing himself to a woman in a Manhattan building, about 19 kilometers from where he actually was. Williams is 1.88 meters tall and weighs around 104 kilograms. The suspect the victim described was about 1.68 meters tall and weighed roughly 73 kilograms.
The thing that connected them was an AI facial recognition match based on grainy CCTV video.

A Flawed Match
The NYPD has been using facial recognition technology since 2011. Between 2007 and 2020, it spent more than $2.8 billion on surveillance tools—including stingray phone trackers, crime prediction software, and X-ray vans. The department now runs countless facial recognition searches every year.
The technology’s use in the Williams case followed a now-familiar pattern. Investigators fed a blurry still from the grainy CCTV footage into the department’s system. An algorithm converted the face into a series of data points and returned six possible matches. All of them were Black men with dreadlocks and facial hair.
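In broad strokes, systems like this reduce each face to a numeric "embedding" and then rank a gallery of stored mug shots by how similar their embeddings are to the probe image. The sketch below is purely illustrative; it is not the NYPD's software, and the embedding function, gallery, and similarity threshold are assumptions. It is meant to show one reason such searches mislead: the search always returns its closest lookalikes, whether or not the true match is in the gallery at all.

```python
# Illustrative sketch only; not the NYPD's system. It shows the general shape of
# facial-recognition candidate retrieval: embed a probe face, rank a gallery of
# mug-shot embeddings by cosine similarity, and return the top-k "possible matches".
import numpy as np

EMBEDDING_DIM = 128  # assumed length of a face-embedding vector


def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (normally a neural network).
    Here the pixels are just hashed into a fixed-length unit vector for demonstration."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)


def top_k_candidates(probe: np.ndarray, gallery: dict[str, np.ndarray], k: int = 6):
    """Rank gallery identities by cosine similarity to the probe embedding.
    Note: this always returns k candidates, even if no true match exists."""
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]


if __name__ == "__main__":
    # Fake mug-shot gallery: random "images" standing in for booking photos.
    rng = np.random.default_rng(0)
    gallery = {
        f"person_{i}": embed_face(rng.integers(0, 255, (64, 64), dtype=np.uint8))
        for i in range(1000)
    }
    # A low-quality probe frame from CCTV (here, just noise).
    probe = embed_face(rng.integers(0, 255, (64, 64), dtype=np.uint8))
    for name, score in top_k_candidates(probe, gallery):
        print(f"{name}: similarity {score:.3f}")  # "possible matches", not identifications
```

The point of the sketch is the final comment: a ranked list of lookalikes is an investigative lead, not an identification, which is precisely the distinction that broke down in Williams's case.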
Williams had been arrested a few months earlier on an unrelated misdemeanor charge, so his mug shot still lingered in the system. An examiner chose his photo as a “possible match.” A report even warned: “not probable cause to arrest.”
Still, detectives used the photo in a lineup. The victim pointed to him. “Confident it is him,” a detective wrote.
That was enough for police to make the arrest. They didn’t check his phone records, verify his alibi, or contact his employer.
When shown the surveillance still, Williams pleaded: “That’s not me, man. I swear to God, that’s not me.” A detective replied: “Of course you’re going to say that wasn’t you.”
Surveillance Meets Eyewitness Memory

The woman who made the initial complaint told police she had seen the man before. The perpetrator was a delivery worker who lingered in the hallway of her building on East 17th Street in Manhattan. On February 10, she said, he appeared in a hallway mirror, genitals exposed. She screamed. He fled.
But at that moment, Williams was in Marine Park, Brooklyn. Cell phone tower records confirmed it. He had been driving home from his job in Connecticut, where he worked with autistic adults.
It didn’t matter.
He was jailed for more than two days. Prosecutors finally dropped the charges in July, but the damage was done.
“In the blink of an eye, your whole life could change,” Williams said.
Oops, AI Did It Again…
Trevis Williams is not alone.
Across the country, at least 10 people have been wrongly arrested due to facial recognition, according to media reports. Most of them, like Williams, were people of color.
In Detroit, three Black men have been wrongly arrested on the strength of facial recognition matches. In one 2022 case, a man who had been falsely identified was charged with attempted murder and held for over a month before he could prove he was not at the scene.
Civil rights groups have issued sharp warnings. “We’ve seen this over and over across the country,” said Nathan Wessler of the ACLU, as reported by The New York Times. “One of the primary dangers of this technology is that it often gets it wrong.”
A 2023 study from the National Institute of Standards and Technology (NIST) found that facial recognition systems could match mug shots with 99.9 percent accuracy, but only when the photos were clear and controlled. When the images were blurry, dimly lit, or taken at an angle, as is often the case in real life, the error rate climbed.
“It may drop significantly when low-quality or uncontrolled images are used,” said Michael King, a federal advisor who studied the report.
No Guardrails in Place
In some cities, safeguards are built into the process. In Detroit and Indiana, for example, police cannot include a facial recognition match in a photo lineup unless there’s supporting evidence like fingerprints or DNA.
The NYPD has no such rule.
It also doesn’t track how often the tool leads to mistaken arrests. While officials say the technology is only one part of an investigation, critics say that’s misleading.
“Even if there is a possible match, the NYPD cannot and will never make an arrest solely using facial recognition technology,” NYPD spokesperson Brad Weekes told ABC7.
But Williams’s lawyer, Diane Akerman, disputes that: “Traditional police work could have solved this case or at least saved Mr. Williams from going through this.”
The Legal Aid Society, which represented Williams, has asked the city’s Department of Investigation to look into the NYPD’s practices. In a letter, it warned that “the cases we have identified are only the tip of the iceberg.”
The group also accused the NYPD’s Intelligence Division of bypassing policy by enlisting other agencies, like the Fire Department (FDNY), to run facial recognition scans that the NYPD itself is barred from doing.
In one case, the FDNY used Clearview AI software, which has long been criticized for its secrecy and lack of oversight, to identify a protester, leading to a charge that has since been dismissed. STOP, the Surveillance Technology Oversight Project, calls these workarounds “deeply alarming.”
“Everyone, including the NYPD, knows that facial recognition technology is unreliable,” said Akerman. “Yet the NYPD disregards even its own protocols.”
A Future in Limbo
Williams had been preparing to become a correctional officer at Rikers Island. But after the arrest, the hiring process stalled.
“I was so angry…” he told ABC7. “I hope people don’t have to sit in jail or prison for things that they didn’t do.”
He still worries that the arrest will follow him. “Sometimes, I just feel like I’m having panic attacks,” he said.
The public lewdness case has since been closed. No one else has been charged.
Facial recognition technology is often sold as a boon to law enforcement — a tool to unmask criminals hiding in plain sight. But when used recklessly, it just creates new victims.
Williams’s story shows what happens when a powerful algorithm meets a fallible eyewitness without the basic guardrails of good policing.