Deepfakes are one of the thorniest challenges AI has introduced to the industries it's revolutionized. These fake videos, images, or audio clips look hyper-realistic and are created with advanced AI tools that blur the line between real and fake.
Given growing concerns about misinformation, harassment, and fraud, a common question is: Are deepfakes illegal? The answer isn't a simple "yes" or "no." In this post, let's look at what deepfakes are, when they violate the law, and how U.S. law is catching up to this quickly developing technology.
What Are Deepfakes?
Deepfakes are synthetic media in which a person's likeness in an image or video is replaced with someone else's. Generated by AI, they can be astonishingly realistic and hard for the average viewer to identify as fake.
Deepfakes were once used mostly for harmless entertainment or parody, but they have moved into more dangerous territory, with serious ethical and legal implications. Research by Sensity AI found that the number of deepfake videos online doubled every six months between 2018 and 2023, and that the U.S. has been a top target for political and social manipulation via this technology.
When Are Deepfakes Illegal?
No single federal law in the U.S. makes deepfakes illegal across the board. Rather, it's how a deepfake is used that determines whether it breaks existing laws.
1. Nonconsensual Deepfake Pornography
The most common and harmful use of deepfakes is creating fake pornography without a person's consent. This content often targets celebrities and public figures, but ordinary people are victims as well.
In the U.S., many states have already passed, or are considering, laws that criminalize nonconsensual deepfake pornography. Virginia, California, and Texas, for instance, have laws that specifically prohibit sharing fake explicit images or videos without the subject's consent.
2. Defamation and Harassment
Using a deepfake to harm someone’s reputation or cause emotional distress could be a case of defamation, harassment, or cyberbullying. Even if no criminal law mentions deepfakes, people who are the target of damaging deepfakes in the U.S. may have the right to sue under civil laws.
3. Election Interference and Misinformation
Deepfakes can be particularly damaging in political campaigns and elections. Some states, such as Texas and California, have passed laws prohibiting the distribution of deceptive deepfakes within 30 to 60 days of an election.
A 2024 report from Pew Research found that over 67% of Americans are worried that AI deepfakes will be used to mislead voters in upcoming elections, which has spurred more regulatory action.
4. Fraud and Financial Crimes
Deepfakes can be used to commit fraud, such as impersonating a CEO on a video call to trick someone into transferring money. Such uses are illegal, and existing fraud and identity theft laws apply.
Do Deepfakes Violate Federal Laws?
The U.S. has no sweeping deepfake ban at the federal level, but several bills have been introduced in Congress, such as:
DEEPFAKES Accountability Act
This bill would require creators of synthetic media to add clear labels or watermarks so viewers know they are looking at AI-generated content.
The proposed law also seeks to create standards for detecting deepfakes, particularly for national security purposes. None of these bills has become federal law yet, but they signal growing concern in Washington about the problems deepfakes could pose.
The Challenges of Regulating Deepfakes
Regulating deepfakes isn't easy. Balancing freedom of speech, artistic expression, and protection from harm is a complex legal challenge in the U.S. Not all deepfakes are harmful; some are even useful, such as those used in movies, entertainment, or education.
The main legal question is whether the deepfake itself harms someone — reputationally, emotionally or financially — or interferes with the public trust, particularly in politics.
What Are Companies and Platforms Doing?
As lawmakers work towards better regulations, social media platforms and tech companies are moving to control the spread of harmful deepfakes.
For example, platforms such as Facebook, X (formerly Twitter), and YouTube have introduced policies to label or remove manipulated media that could mislead users. Major U.S. companies are also investing in deepfake detection tools to help combat the issue.
What Should You Do If You're the Victim of a Deepfake?
If you think you have been the subject of a harmful deepfake:
Document evidence of the problem (screenshots, URLs, etc.) as soon as possible.
Contact the platform to request a takedown.
If the deepfake contains threats, extortion, or nonconsensual explicit content, report it to law enforcement.
Consult a lawyer about filing a civil action, such as a defamation lawsuit.
As the technology advances, the tools and legal remedies available to protect victims will evolve as well.
Conclusion
So, are deepfakes illegal? It all depends on how they are used. Making a deepfake itself isn’t a crime in the U.S., but using it to hurt, deceive or defraud people is.
Public concern and legislation are catching up, and the U.S. is moving toward stronger protections against harmful uses of deepfakes.
As deepfake technology becomes more accessible, it's important for individuals, businesses, and lawmakers alike to understand the risks and the legal realities.