
With widespread access to artificial intelligence, experts have begun to raise the alarm about deepfakes: images, videos, or audio generated or edited by AI. While some of the most widely circulated deepfakes have been video, emails, text messages, and even Slack and Teams messages can be deepfaked too.
Generating deepfakes previously required specialized knowledge and equipment. Now anyone can create them, often for free and with little to no technical knowledge. Last year, the bipartisan NO FAKES Act, which seeks to protect individuals’ voice and visual likeness from replication, modification, or recreation via generative AI, was introduced in Congress. The bill would establish a national standard under which creators’ likenesses cannot be used without their consent. However, it has yet to pass.
While a lot of ink has been spilled on the impact of deepfakes on democracy, it is important to understand the potential ramifications for litigation too.
Undermining Evidence Integrity
A court’s ability to authenticate evidence is a cornerstone of legal procedure. Deepfakes undermine evidence integrity in two key ways.
First, deepfakes are convincing. Most people are unable to tell the difference between a deepfake and a real video, according to research by technology company iProov. Improvements in the technology will only exacerbate the problem.
Second, as deepfakes enter the mainstream, people are less likely to trust the validity of any evidence. When false evidence propagates, even legitimate evidence becomes suspect. Moreover, there is evidence that the rise of deepfakes—and fake media across the internet more broadly—is eroding people’s ability to distinguish between real and false information. As media literacy decreases, the bar for proving that a piece of evidence is real will continue to rise.
Current legal frameworks are not necessarily equipped to handle deepfakes directly. However, courts can mitigate the harm by applying a sufficiently rigorous authentication standard under Federal Rule of Evidence 901.
Mitigating Harm
The Federal Rules of Evidence are designed to “administer every proceeding fairly, eliminate unjustifiable expense and delay, and promote the development of evidence law, to the end of ascertaining the truth and securing a just determination.” The principle underlying the rules is that the proponent of evidence is responsible for establishing the relevance and authenticity of that evidence.
The problem, of course, is that there is little legal precedent for applying the Federal Rules of Evidence to deepfakes.
In The Threat of Deepfakes in Litigation, an article published in the Vanderbilt Journal of Entertainment & Technology Law, Agnieszka McPeak recommends looking at legal precedent surrounding the authentication of electronically stored information (ESI) and social media posts to understand how to handle deepfakes. Specifically, she cites two cases: Lorraine v. Markel American Insurance Co. and Griffin v. State.
In Lorraine v. Markel American Insurance Co., the U.S. District Court for the District of Maryland held that ESI must clear a series of evidentiary hurdles, including authentication under Rule 901, the rules prohibiting hearsay, the original writing rule, and Rule 403, which balances probative value against the danger of unfair prejudice. In other words, the proponent of the evidence must show that the item is what the proponent claims it is. And in Griffin v. State, the state attempted to authenticate a MySpace comment through the testimony of the investigator who found the comment online. However, the state failed to obtain testimony from the post’s author or other witnesses and was therefore unable to authenticate the evidence.
The U.S. Courts Advisory Committee on the Federal Rules of Evidence offered a middle-ground approach in its proposed amendments to address AI-generated evidence. (See the Committee’s November 2024 Agenda Book; the proposed amendments appear on pages 269-71.) The Committee put forth the following addition to Rule 901:
If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, by artificial intelligence [by an automated system], the evidence is admissible only if the proponent demonstrates to the court that it is more likely than not authentic.
Under this amendment, potential deepfakes can be challenged if the opponent makes a showing sufficient for a jury reasonably to find that the evidence has been altered or fabricated by AI. The burden then shifts to the proponent to demonstrate that the evidence is more likely than not authentic. The Committee noted that the preponderance standard is justified in this scenario because “any member of the public has the capacity to make a deepfake, with little effort and expense, and deepfakes have become more difficult to detect.”
Going forward, courts might also consider incorporating training for judges and jurors to help them recognize and evaluate deepfakes. AI literacy is vital for combating the spread of misinformation—both in the legal world and in the public sphere more broadly.