Deepfakes: The Perfect Evidentiary Storm?
Manuel A. Quilichini

“Seeing is believing.” “The eyes don’t lie.” “A picture is worth a thousand words.” “I’ll believe it when I see it.” There is no doubt that in our society, audiovisual information carries considerably more credibility than any other type of information. We have instinctively trusted what we can see or hear. Those days have come to an end with the technology known as “deepfakes.”
Thanks to generative artificial intelligence, we’ve entered a new era of evidence fabrication. Deepfakes – realistic videos or audio recordings generated or manipulated by artificial intelligence – can depict apparently real people saying or doing things they never said or did. What makes matters worse is that the software to create deepfakes is readily accessible to anyone with a computer, no special skills or large sums of money required. Hence the rapid proliferation of false images, sounds, and videos, especially on social media.
No longer a hypothetical threat, deepfakes are beginning to appear in litigation—sometimes as fabricated evidence, other times as the basis for a defense. A simple search in LexisNexis shows the word “deepfake” mentioned in 18 federal cases and 8 state cases. Several hundred articles, statutes, and bills have focused on how deepfakes affect all aspects of our daily lives. Deepfakes are on their way to becoming ubiquitous.
The consequences could be profound – faked confessions, forged surveillance footage, or discredited legitimate evidence, all posing serious risks to the truth-seeking mission of the courts. This reality is a call to raise awareness of the evidentiary challenges that deepfakes present, especially around authentication. Judges and attorneys must now approach audiovisual evidence with a healthy mix of skepticism, technological literacy, and procedural vigilance. The evidentiary perfect storm is gathering—and we need to be ready.
Why Deepfakes Matter in the Courtroom
Few types of evidence carry more persuasive weight than video or audio. A surveillance clip, a recorded confession, a dash cam, or even a voicemail can decisively shape how a jury or judge interprets a case. Audiovisual evidence often seems self-authenticating: it shows what it shows, and jurors tend to believe it without much hesitation.
That assumption is precisely what makes deepfakes so dangerous.
Deepfakes exploit our trust in our senses, by making things look and sound real. As technology improves, even digital forensic tools can struggle to distinguish fake from genuine. We’re rapidly approaching a point where manipulated media can pass casual—and sometimes even expert—scrutiny[1].
For the legal system, the stakes are high. Imagine a divorce proceeding in which a parent appears to be caught on video striking a child—but the video is a fabrication. Or a criminal case where a defendant claims that an authentic confession video was manipulated by adversaries. In both scenarios, the truth is obscured, and the court’s ability to reach a just outcome is undermined. Even police body cams are vulnerable to hacks and fabricated evidence[2].
There’s also a broader problem: as deepfakes become more common, jurors may begin to question the reliability of all audiovisual evidence. This is known as the “reverse CSI effect” (or “liar’s dividend”)—a scenario in which jurors, acutely aware of the potential for manipulation, become overly skeptical, even of legitimate recordings. When jurors stop believing their eyes and ears, it threatens the entire evidentiary process.
In short, deepfakes present a dual threat: they can be used to create false evidence, and they can cast doubt on authentic evidence. Either way, the result is the same – erosion of trust in a type of evidence that has historically been seen as incontrovertible.
The Legal Framework: Authentication Under Rule 901
Federal Rule of Evidence 901 sets the baseline for authenticating evidence. The concept is simple: before evidence can be admitted, a party must produce “evidence sufficient to support a finding that the item is what the proponent claims it is.” For audiovisual evidence, the most commonly used method is testimony from a witness with personal knowledge who affirms that the video or audio is a “fair and accurate portrayal” of the events depicted. But deepfakes complicate this way of presenting evidence.
First, the traditional “fair and accurate portrayal” standard assumes that the witness can meaningfully verify the content. In the deepfake era, even someone present during the events may not detect subtle but significant manipulations in a recording. Sophisticated fakes can swap faces, alter speech, or insert fabricated events in ways that are imperceptible to the human eye or ear – especially if the edits were made by advanced AI.
Second, there are two theories courts use to admit audiovisual evidence, and both are now under strain:
- The pictorial communication theory treats audiovisuals as illustrative: essentially a visual aid to a witness’s testimony.
- The silent witness theory treats the mechanical and usually automated recording as independent evidence that speaks for itself, so long as its reliability is established through chain of custody or other technical assurances.
Deepfakes test both theories. Under pictorial communication, a witness may mistakenly affirm a fake. Under the silent witness theory, a fabricated video with no obvious flaws could appear completely trustworthy unless the opposing party has the expertise and opportunity to challenge it.
The bottom line is that the traditional assumptions underlying Rule 901 may no longer hold. The ability to generate realistic, undetectable fake recordings raises serious questions about whether existing standards are enough – and whether jurors can still rely on what they see and hear in court.
The Emerging Debate: Competing Views on What Should Be Done
As courts begin to grapple with deepfakes, legal scholars and practitioners are split on how to respond. While everyone agrees that deepfakes pose real risks, there is far less agreement on whether existing evidentiary rules are equipped to handle them—or whether significant reform is needed. Debate continues over what the reform should entail and its impact on our judicial system.
View 1: The Current Framework Is Sufficient
Some experts, like Riana Pfefferkorn, argue that courts already have the tools they need[3]. Our evidentiary rules require authentication, which means that a party wanting to admit a video must still prove that it shows what it purports to show. Courts have a long history of dealing with forged evidence—altered documents, doctored photos, even staged crime scenes. From this perspective, deepfakes are just a new version of an old problem, and no rule changes are required.
Supporters of this view caution against overreacting. Raising the authentication bar too high might exclude valid, probative evidence, especially for parties with limited resources. Instead of rewriting the rules, they suggest doubling down on established methods: rigorous cross-examination, expert witnesses, chain-of-custody documentation, and adversarial testing in the courtroom.
View 2: The Rules Must Evolve
Others believe that the existing framework is outdated. Scholars like Rebecca Delfino[4] and John LaMonaca[5] have called for more rigorous standards and even amendments to the Federal Rules of Evidence.
One major proposal is to shift the responsibility for determining the authenticity of contested audiovisual evidence from the jury to the judge—treating it as a gatekeeping issue under Rule 104(a), akin to Daubert hearings for expert testimony. Proponents argue that jurors lack the technical expertise to discern a sophisticated deepfake and may be misled or confused.
Another recommendation is to raise the evidentiary threshold by requiring corroborating circumstantial evidence—such as metadata, forensic analysis, or independent witness confirmation—before admitting any standalone video or audio under the silent witness theory.
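To make the idea of corroboration concrete, here is a minimal Python sketch of the kind of integrity record a proponent (or a skeptical opponent) might assemble around a recording before it is offered: a cryptographic hash of the file and its filesystem timestamp, which can then be compared against the claimed circumstances of the recording. The file name is hypothetical, and real forensic review goes much further (container metadata, codec and sensor-noise analysis, device logs); this is an illustration, not a forensic standard.

import hashlib
from datetime import datetime, timezone
from pathlib import Path

def summarize_evidence_file(path: str) -> dict:
    """Return a basic integrity summary (size, SHA-256 hash, timestamp) for a media file."""
    p = Path(path)
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    stat = p.stat()
    return {
        "file": p.name,
        "size_bytes": stat.st_size,
        "sha256": digest.hexdigest(),
        # Filesystem timestamps are easy to alter, so they corroborate rather
        # than prove when the file was last written.
        "modified_utc": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }

# "bodycam_0412.mp4" is a hypothetical file name used only for illustration.
print(summarize_evidence_file("bodycam_0412.mp4"))

A summary like this does not prove authenticity by itself, but it gives the opposing party and the court something concrete to test against metadata, device records, and witness testimony.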
Middle-Ground Approaches
Some suggest a hybrid approach: courts could maintain the current structure but encourage more robust authentication protocols when audiovisual evidence is at issue[6]. For example:
- Judges could issue limiting instructions reminding jurors that videos can be manipulated.
- Parties might be encouraged (or required) to disclose potential use of AI-generated media in discovery.
- Courts could develop local rules or best practices around the use of forensic video authentication experts.
Judge Paul W. Grimm, Professor Maura R. Grossman, and Professor Gordon V. Cormack suggest a structured approach when allegations of deepfake evidence arise[7]. They state that a mere assertion that evidence is a deepfake is insufficient to warrant exclusion or a pretrial hearing. Instead, there must be a credible, fact-based challenge to the authenticity of the evidence. Upon such a showing, they recommend that judges conduct a pretrial evidentiary hearing under Rule 104(a) of the Federal Rules of Evidence. This hearing allows the proponent to demonstrate the reliability and authenticity of the evidence, potentially through expert testimony or forensic analysis. This approach maintains the integrity of the evidentiary process without impeding the admissibility of legitimate evidence.
What Should Trial Lawyers and Judges Do?
Whether or not the Federal Rules evolve, lawyers and judges must start adapting now. Deepfakes are no longer theoretical—they are becoming tools in the evidentiary arsenal, and potential weapons in the hands of bad actors. Therefore, a few practical recommendations follow.
For Trial Lawyers: Be Proactive, Not Reactive
- Don’t assume the video will speak for itself. If your case relies on audiovisual evidence, be prepared to prove its authenticity, not just introduce it. This may require establishing chain of custody, calling witnesses with direct knowledge of the recording, or retaining forensic experts to validate the file (see the sketch after this list).
- Conduct due diligence on your own evidence. Even if your client supplies what appears to be an authentic recording, verify its origin. You don’t want to walk into court relying on a deepfake—especially if your opponent is prepared to challenge it.
- When opposing a suspicious video, raise the issue early. File pretrial motions to exclude unauthenticated or questionable media, request discovery of original source files, or ask for a hearing under Rule 104(a) to challenge admissibility.
- Educate yourself and the court. Courts are only beginning to encounter deepfakes. We must educate ourselves on this technology so that we can argue more effectively for or against admissibility, and we must be ready to educate the court on the nuances of deepfakes.
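The first recommendation above mentions chain of custody. As a minimal sketch of what hash-based custody documentation could look like, the following Python example logs each transfer of a recording along with a fresh SHA-256 hash of the file, so any later alteration shows up as a hash mismatch. The file names, party names, and log format are hypothetical; actual practice follows agency or firm protocols and typically works from write-protected forensic copies.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_transfer(log: Path, evidence: Path, from_party: str, to_party: str) -> None:
    """Append a custody entry (who, when, current hash) to a JSON log."""
    entries = json.loads(log.read_text()) if log.exists() else []
    entries.append({
        "evidence": evidence.name,
        "from": from_party,
        "to": to_party,
        "transferred_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": sha256_of(evidence),
    })
    log.write_text(json.dumps(entries, indent=2))
    # Every recorded hash should match the first entry; a mismatch signals
    # that the file was altered or swapped somewhere along the chain.
    if len({e["sha256"] for e in entries}) > 1:
        print("WARNING: hash mismatch across custody entries.")

# Hypothetical file and party names, for illustration only.
record_transfer(Path("custody_log.json"), Path("interview_0412.wav"),
                from_party="arresting officer", to_party="evidence custodian")

The point is not the particular script but the practice: a documented, verifiable record of who held the file and whether its contents changed gives counsel something far stronger than a witness’s bare assertion that the recording is a fair and accurate portrayal.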
For Judges: Maintain a Skeptical but Balanced Gatekeeping Role
- Recognize that audiovisual evidence is no longer inherently trustworthy. A recording can be completely fabricated, so judges should be willing to probe the foundations of what may seem like self-evident proof.
- Use Rule 104(a) hearings where appropriate. When authenticity is in serious dispute, resolving the question before the evidence reaches the jury helps protect the integrity of the proceedings.
- Consider tailored jury instructions. Jurors may either be too trusting or too skeptical of video evidence. Judges can help by instructing them on the potential for manipulation—and their responsibility to weigh the evidence in light of the entire record.
- Stay informed. Courts will be on the front lines of this evidentiary shift, pushed by the rapid growth of artificial intelligence in all its forms. Staying current with developments in forensic media analysis and emerging AI detection tools is now part of the job.
A Call for Awareness and Action
Deepfakes are no longer science fiction; they’re a present and growing threat to the integrity of the judicial process. In an adversarial system that relies on the credibility of evidence, the ability to fabricate or challenge audiovisual material with powerful AI tools changes the game. We are entering an evidentiary environment where “what you see” may not be “what happened,” and where even legitimate evidence can be weaponized through doubt.
The legal system doesn’t need to panic, but it does need to prepare. Attorneys must learn to scrutinize videos and recordings with a forensic eye. Judges must evaluate authentication claims more critically. And both must be willing to question long-held assumptions about the trustworthiness of visual and auditory evidence. We must keep in mind that this is not just a technical issue, it’s a credibility issue, one that strikes at the heart of the truth-seeking function of our courts. We must respond with care, vigilance, and updated practices, so that we can meet the challenge head-on. The storm is coming—and for the sake of justice, we must be ready.
[1] Science & Tech Spotlight: Combating Deepfakes, GAO-24-107292 (Mar. 11, 2024), https://www.gao.gov/products/gao-24-107292 (last visited May 9, 2025).
[2] Police Body Cams Can Be Tampered With: Researcher, CISO Mag (Aug. 17, 2018), https://cisomag.com/police-body-cams-can-be-tampered-with-researcher/.
[3] Riana Pfefferkorn, “Deepfakes” in the Courtroom, 29 B.U. Pub. Int. L.J. 245 (2020)
[4] Rebecca A. Delfino, Deepfakes on Trial: A Call To Expand the Trial Judge’s Gatekeeping Role To Protect Legal Proceedings from Technological Fakery, 74 Hastings L.J. 293 (2023)
[5] John P. LaMonaca, A Break from Reality: Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes, 69 Am. U. L. Rev. 1945 (2020)
[6] For example, see Agnieszka McPeak, The Threat of Deepfakes in Litigation: Raising the Authentication Bar to Combat Falsehood, 23 Vand. J. Ent. & Tech. L. 433 (2021)
[7] Paul W. Grimm, Maura R. Grossman & Gordon V. Cormack, Artificial Intelligence as Evidence, 19 Nw. J. Tech. & Intell. Prop. 9 (2021).