Proposed federal protections against AI-generated digital replicas of celebrities and performers raise thorny questions about free speech and about how the new rights would interact with existing intellectual property laws.
The proposed No Fakes Act, released as a discussion draft by a bipartisan group of senators last week, would establish the first federal right to control one’s own image and voice, often called the right of publicity. That’s an area of active concern among celebrities, musicians, and actors who have raised alarms about viral AI-created deepfakes circulating on social media.
The YouTube star MrBeast and actor Tom Hanks recently warned of AI ads that ape their faces and voices to falsely show them endorsing products. And music publishers and labels were disquieted earlier this year by a viral song created with AI-generated vocals imitating recording artists Drake and The Weeknd that racked up millions of listens before being taken down.
Artists and celebrities have existing legal avenues to combat digital replicas, but the laws are splintered across jurisdictions nationwide. Right-of-publicity laws vary state by state and have at times clashed with federal copyright and trademark laws.
“Our laws need to keep up with this quickly evolving technology,” Sen. Amy Klobuchar (D-Minn.) said in a press release announcing the bill. “We must put in place rules of the road to protect people from having their voice and likeness replicated through AI without their permission.”
Klobuchar joined Sens. Chris Coons (D-Del.), Thom Tillis (R-N.C.), and Marsha Blackburn (R-Tenn.) to release the discussion draft, which is formally named the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023.
The bill would create a new right to control digital replicas that use a person’s “image, voice, or visual likeness,” and that protection would exist for 70 years after the person’s death. People or platforms that knowingly share a digital replica without consent—even if they use a disclaimer—could be liable for $5,000 per violation and any economic damages proven in court.
The anti-deepfake proposal comes at a time when courts are beginning to sort through the intellectual property implications of fast-moving AI technology. Although the bill includes a range of exceptions—including news broadcasts, documentaries, and parodies—the language used could still run afoul of the First Amendment and could alter courts’ and creators’ approaches to enforcing existing IP rights.
“This strikes me as too soon, too quick,” said IP attorney Jeremy Elman of Duane Morris LLP. “It seems to be upsetting the carefully struck balance from cases over the past hundred years in copyright law and more recently in right-of-publicity laws.”
Property Over Privacy
State right-of-publicity laws differ greatly in scope and scale. Some states lack any explicit legal right, while entertainment industry hubs such as California and New York have clear statutory protections that have generated decades of case law.
Publicity rights in many states were originally derived from privacy laws that sought to limit how a private person’s identity could be unwillingly thrust into public view through ads or commercial association.
Digital rights group Electronic Frontier Foundation has warned that lawmakers shouldn’t use fears about AI to transform publicity rights into property rights. That could result in celebrities and companies wielding the law to target speech they don’t like, EFF says.
“I cannot stress enough how not everything needs to be a property right,” Katharine Trendacosta, EFF’s director of policy and advocacy, said.
The bill could incentivize trolling schemes, she warned, such as companies threatening lawsuits and extracting settlements from people who make digital replicas that would otherwise be protected free speech.
The No Fakes Act’s expansive post-mortem right, in particular, suggests the bill is focused more on protecting commercial gain than on combating nonconsensual digital replicas, Trendacosta said.
“If what they’re concerned about is an artist being replaced by a digital copy, then life plus 70 years doesn’t really make sense,” she said.
Many state laws extend publicity rights only until the person’s death. California’s law extends those rights to 70 years after death, and New York enacted legislation in 2020 that extended the right to 40 years post mortem.
The bill could also upend how social media platforms deal with legal liability. The 1998 Digital Millennium Copyright Act established a legal safe harbor for websites that host infringing content on the condition that they have a system to take down the content when notified.
That statute carefully weighed “the rights of creators and the rights of platforms,” Elman said, but it’s limited to copyright, not publicity rights. The deepfake bill raises new questions about how platforms could avoid liability and comply with the law.
Creative Industry Concerns
Artists and entertainment companies this year have urged lawmakers in Washington and in state capitols to create new guardrails for AI, arguing that the unfettered technology exploits their creative work and undermines their livelihoods.
Many groups representing those in the creative industry were quick to show support for the No Fakes Act, including the Screen Actors Guild-American Federation of Television and Radio Artists, the Recording Industry Association of America, and the Human Artistry Campaign.
SAG-AFTRA, which is in the midst of a months-long strike, has sought protections from studios using AI to repurpose a performer’s image and voice for new content without additional credit or compensation.
“For our members, their voice and likeness is their livelihood,” the union’s chief negotiator Duncan Crabtree-Ireland said in a statement. “They spend a lifetime improving their talent and building their value. It is outrageous to think someone can undermine that value with a few prompts and clicks on a keyboard.”
Even without a new federal law regulating deepfakes, performers and celebrities are already testing their existing rights against AI in the courts. A finalist on the CBS reality TV show “Big Brother” is suing the company behind an AI app that allows users to digitally paste their own faces over his and other celebrities’. A California federal judge allowed the right-of-publicity lawsuit to proceed, shooting down the company’s First Amendment defense.
The intellectual property subcommittees in the US House and Senate have held an assortment of hearings on AI featuring testimony from photographers, songwriters, and visual artists. While the hearings were focused on the intersection of copyright law and AI, discussions often centered on how replicas can mimic an artist’s style, voice, or image—features that aren’t protected by copyright law.
“Getting some national uniformity around the right of publicity is an important step,” said Christian Mammen, a partner at Womble Bond Dickinson LLP. “I’m going to anticipate that this will get revised, potentially significantly, as it moves forward.”