How Deepfakes Could Break Underfunded & Overworked Public Defense

I have sat in courtrooms and watched fabricated media nearly send innocent people to prison. I’ve been brought in to examine digital evidence that looked completely authentic on its face: text messages, screenshots, audio recordings, communication records. The timestamps checked out. The formatting was consistent. The content told a damning story. In every one of those cases, had I not been there to prove the evidence was fabricated, it could have gone into the record unchallenged. People's lives could have been destroyed by lies dressed up as data.

I was in the room because someone could afford to hire me, or because a court appointed me as an expert. In the vast majority of criminal cases in this country, that doesn't happen. And the gap between cases where a forensic expert is in the room and cases where one is not is about to become the gap between a fair trial and a wrongful conviction.

Deepfake Evidence Has Already Reached U.S. Courtrooms

Synthetic media is no longer a future concern. In the second quarter of 2025, Resemble AI tracked 487 verified deepfake incidents. The following quarter, that figure jumped to 2,031. Research shows most people cannot reliably catch high-quality fakes, and voice cloning has crossed what one researcher told Fortune is the "indistinguishable threshold."

Courts are feeling it. As the Berkeley Technology Law Journal cataloged in 2025, judges have fielded deepfake objections in cases ranging from Huang v. Tesla to multiple January 6 prosecutions. In United States v. Khalilian, the defense moved to exclude a voice recording on the grounds that it could have been deepfaked. In a California housing dispute, NBC News reported that a judge identified an entire video exhibit as AI-generated, complete with a synthetic witness whose face was distorted and whose voice was monotone. The federal judiciary takes this seriously enough that the Advisory Committee on Evidence Rules has been considering amendments to Federal Rule of Evidence 901 and has proposed a new Rule 707 to govern AI and machine-generated evidence.

What gets less attention is that the deepfake defense doesn't just work as a Hail Mary for defendants. It can be turned against any digital evidence, including evidence submitted by the defense. When a prosecutor challenges the authenticity of a defendant's cell phone video, a witness's photograph, or a recording that could prove innocence, the question becomes brutally simple: who can afford to prove it's real?

Enterprise Evidence vs. Consumer Grade

On the prosecution's side, evidence increasingly flows from enterprise-grade systems built for authentication. Body-worn cameras from companies like Axon store footage in cloud platforms with tamper-evident audit trails, cryptographic file fingerprints, and automated chain-of-custody documentation. When a prosecutor introduces body camera footage, the system itself provides the provenance. The cost of proving that evidence is real was paid when the department bought the camera system.
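The authentication those platforms sell is conceptually simple: compute a cryptographic digest of the file at ingest, then re-compute and compare it at every later access, logging each check. A minimal sketch in Python of that idea (the function names and log format here are illustrative, not any vendor's actual API):

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str, audit_log: list) -> bool:
    """Re-hash the file, append a chain-of-custody entry, report integrity."""
    current = fingerprint(path)
    intact = current == recorded_digest
    audit_log.append({
        "file": path,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "digest": current,
        "intact": intact,
    })
    return intact
```

Change a single byte of the footage and the digests stop matching, which is exactly why evidence captured inside such a system is cheap to authenticate and evidence captured outside it is not.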

A defendant's exculpatory evidence rarely has that infrastructure. The cell phone video placing the defendant somewhere else, or the photo contradicting a witness, originates on a consumer device that was never designed to function as an evidence management system. The forensic problems start the moment the file leaves the phone. Research published in Perspectives in Legal and Forensic Sciences found that default image-based transfers through platforms like WhatsApp, Instagram, Facebook Messenger, and Snapchat strip critical metadata and re-encode the file, altering its digital fingerprint. By the time the lawyer touches the file, the data that could authenticate it may already be gone.
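What "stripping metadata" means in practice: a photo saved straight from a camera carries an EXIF block (an APP1 segment in the JPEG byte stream), while a re-encoded copy that came back through a messaging platform usually does not. A toy standard-library check for that segment, assuming a well-formed JPEG (real forensic tools parse the full EXIF directory, not just detect it):

```python
def has_exif(data: bytes) -> bool:
    """Scan a JPEG byte stream for an APP1 'Exif' segment."""
    if data[:2] != b"\xff\xd8":        # not a JPEG: missing SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # stream out of sync; stop scanning
            break
        marker = data[i + 1]
        if marker == 0xDA:             # start-of-scan: image data follows
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                # skip to the next segment
    return False
```

A file that fails this check isn't fake; it has simply lost the metadata that would have helped prove it real, which is the forensic hole the transfer platforms leave behind.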

Restoring that authentication chain requires going back to the source phone and performing device-level forensics. And that is where the cost problem turns constitutional.

Public Defenders Can't Always Afford Digital Forensics Experts When They Need Them

Device-level forensic examinations are not cheap. Common mobile device examinations cost in the low thousands of dollars, with complex cases potentially going substantially higher. Add expert witness time for depositions and trial testimony, and the cost of authenticating a single piece of digital evidence can climb into five figures. Legal commentators have begun flagging this as an emerging access-to-justice problem: wealthy litigants can afford comprehensive forensic analysis while individuals and indigent defendants cannot.

Now look at who is paying. The American Bar Association reported in 2024 that federal courts requested $1.69 billion for defender services in fiscal year 2025 and got substantially less, leaving the system roughly $190 million short. By July 2025, the program paying court-appointed private attorneys ran out of money entirely, leaving lawyers defending indigent clients working without pay. A Vera Institute analysis found state-level indigent defense systems chronically under-resourced for decades, and Governing documented public defenders carrying more than 500 misdemeanor cases per year in some jurisdictions.

These offices cannot keep enough attorneys on staff. They rarely retain digital forensics examiners. They are not bringing in expert witnesses to explain hash values and metadata integrity to a jury. The money isn't there. It never has been.

The Stakes Of Deepfake Evidence Are Constitutional

Yes, the government’s enterprise systems fail sometimes. Body camera footage disappears. Officers switch cameras off at bad moments. But those are failures of compliance, not technology. When the government’s evidence goes missing, it’s usually because somebody didn't follow the procedure. When the defense's evidence cannot be authenticated, there was no procedure to follow.

One side has a compliance problem. The other has a structural one.

I know how to prove evidence is fabricated. I’ve done it, and I've testified to it. But for every case where a forensic expert was in the room, there are cases where no one was. Where a fabricated screenshot sailed in unchallenged. Where a real cell phone video was tossed because nobody could afford to prove it was real.

The Sixth Amendment promises a fair defense. A fair defense in the age of synthetic media requires the ability to prove that truth is truth. Right now, we are building a system where that ability depends in large part on which side of the courtroom you sit on. 
