- California has two new laws regulating deepfakes – videos or images manipulated with artificial intelligence to make it appear as if someone has said or done something that they haven’t.
- The first law makes it illegal to post deepfakes of political candidates in the 60 days ahead of an election. It was introduced after a Nancy Pelosi deepfake went viral.
- The second law allows state residents to sue anyone who uses a deepfake to place them in pornographic material without consent. A recent study found that more than 90% of deepfakes are pornographic and target women.
- But civil liberties and misinformation experts have criticized both laws, calling them misguided, vague, and subjective, and warning that they threaten free speech.
Last week, California Gov. Gavin Newsom signed two deepfake bills into state law.
The first is political, making it illegal to post manipulated videos and pictures that give a “false impression of a political candidate’s actions or words” in the 60 days before an election.
The bill was introduced by Democratic Assemblyman Marc Berman after a deepfake of Nancy Pelosi went viral, in which her speech was altered in a video to make it sound like she was slurring her words.
“In the context of elections, the ability to attribute speech or conduct to a candidate that is false – that never happened – makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters,” Berman said in a statement.
The law takes effect next year and includes exemptions for news outlets, satire and parody, and manipulated videos that carry clear disclaimers.
But most deepfakes aren't political. Deeptrace, a cybersecurity company, released a study of almost 15,000 deepfakes and found that more than 90% were pornographic. All of these deepfakes targeted women, fueling a horrifying and prevalent new form of online harassment and revenge porn.
Accordingly, the second California deepfake law allows residents to sue anyone who uses deepfake technology to place them in pornographic material without consent. Both measures seem positive, but they may not have their intended effect, according to experts.
Experts say California's deepfake legislation is misguided, and threatens free speech
Claire Wardle is the executive director of First Draft, a nonprofit focused on addressing the online tactics that fuel misinformation and disinformation. As worries about deepfakes have grown, Wardle isn't sure our attention is in the right place.
"I have real concerns about new legislation that focuses on the technology or techniques used to create the manipulated content," Wardle told Business Insider. "It's the impact - especially the harm that it has - that we should be focused on."
There are already laws that regulate the impact of pornographic deepfakes, including specific measures for revenge porn and digital harassment. Wardle argues that we should be using those existing laws to remedy the harm caused by deepfakes.
David Greene, the Electronic Frontier Foundation's civil liberties director, is similarly skeptical of deepfake legislation.
Greene added extortion, false light, and defamation to the list of laws that could already police deepfakes, depending on the creator's intent. Further, Greene says California's political deepfake law does not strike an appropriate balance between preventing harm and protecting free speech.
"The law is overbroad, vague, and subjective," Greene told Business Insider. "It hinges on whether the deepfake leads to a fundamentally different impression of the candidate, which is not specific enough, and could suppress speech."
Both the EFF and the ACLU wrote letters to Gov. Newsom, warning that the political deepfake law would not solve the problem and may only lead to more confusion.
The governor's office did not immediately respond to a request for comment regarding opposition to the law.
Assemblyman Berman's office issued a statement to Business Insider, and included a separate letter from Erwin Chemerinsky, dean of UC Berkeley School of Law, who supported the bill and noted that there are limits on First Amendment rights for false speech.
"While the First Amendment gives you the right to say whatever you want, it does not give you the right to put your words into my mouth, or to use AI technology to take my body and make it look like I did something I never did, which is what this new law addresses," Berman told Business Insider.
But Wardle and Greene also expressed concern over how the exemptions for satire and parody would be determined.
"As people have become increasingly concerned about the impact of disinformation, we've learned the challenges of legislating around content," Wardle told Business Insider. "It can have really worrying consequences on free speech."
Instead, Wardle and Greene agree that we need to place more emphasis on understanding the intent behind creating deepfakes.
"There has been a consensus that we should focus on the sources," Wardle said. "Who is creating the content? What are they aiming to achieve? Is it a coordinated campaign to manipulate? That's how we should think about these questions."