As artificial intelligence advances, talent agencies are bulking up their defenses to protect Hollywood stars from misleading, manipulated images and videos that can put them at risk.
The rise of generative AI and “deepfakes” — or videos and pictures that use a person’s image in a false way — has led to the wide proliferation of unauthorized clips that can damage celebrities’ brands and businesses.
These clips purport to show famous people saying and doing things they never said or did: fake nudes, for example, or videos crafted to make it look like a Hollywood star is endorsing a product they have never used. And the problem is expected to grow.
Now there are technological tools that use AI to combat that threat, and the entertainment industry has come knocking.
Talent agency WME has inked a partnership with Loti, a Seattle-based firm whose software flags unauthorized content posted online that uses clients’ likenesses. The 25-employee company then quickly sends requests to online platforms to have the infringing photos and videos removed.
Financial details of the deal were not disclosed.
Artificial intelligence has been cast as both friend and foe in Hollywood: a tool that could make production more efficient and inspire innovation, but also a job killer and yet another way for intellectual property to be stolen.
The need for better protections against AI played a central role in last summer’s strikes by the Writers Guild of America and the actors’ guild SAG-AFTRA. On Tuesday, the nonprofit Artist Rights Alliance posted an open letter signed by 200 musicians, including Billie Eilish and Elvis Costello, demanding that technology companies “stop devaluing” their work. As deepfakes multiply, agencies are hoping to use AI to stop the bad actors online.
“The worst game of whack-a-mole you are going to play is dealing with the deepfake problem without a technology partner to help you,” said Chris Jacquemin, WME partner and head of digital strategy.
Loti co-founder Luke Arrigoni launched the startup about a year and a half ago. He previously ran an artificial intelligence firm called Arricor AI and before that was a data scientist at Creative Artists Agency, WME’s main rival.
Arrigoni said Loti began working with WME about four or five months ago. WME clients give Loti a few photos of themselves from different angles and record short audio clips, which serve as references for identifying unauthorized content. Loti’s software then scans the web, reports what it finds back to the clients and sends takedown requests to the platforms.
“There’s this kind of growing feeling that this is an impossible problem,” Arrigoni said. “There’s this almost adage now where people say, ‘Once it’s on the internet, it’s on the internet forever.’ Our whole company dispels that myth.”
Arrigoni declined to disclose the financial terms of the partnership or say how many WME clients are using Loti’s technology.
Before using Loti’s technology, Jacquemin said, his agency’s staff had to fight deepfakes on a much more ad hoc basis. They would ask web platforms such as YouTube and Facebook to take down unauthorized material based on what they spotted while browsing or what they heard from clients, whose fans would flag doctored content.
Loti’s technology provides more visibility into the issue. There may be circumstances in which not all unauthorized content will be taken down, depending on the client’s wishes. But at least the performers will know what’s out there.
As far back as 2022, companies such as Meta and Google were already taking down billions of ads or ad accounts that violated their policies against deceptive content, Jacquemin said.
Now, more people in Hollywood are concerned about how newer AI models, some of which are trained in part on publicly available data, could use copyrighted works. These technologies could further blur the line between what’s real and what’s fake.
Harmful fake content that stays up too long could hurt a client’s business opportunities and commercial endorsements.
“They’re so realistic that it would be hard for most people to know the difference,” Arrigoni said.
The deal is the latest in a string of partnerships that WME and its parent company, Endeavor, have struck with AI-related firms. In January, WME partnered with Chicago-based startup Vermillio to protect clients against intellectual property theft by detecting when generative AI content uses a client’s likeness or voice.
Endeavor is a minority investor in Speechify, which makes text-to-speech technology. Endeavor Chief Executive Ari Emanuel used Speechify’s tool to create a synthetic version of his voice, which gave the opening remarks on an Endeavor earnings call last year. (On Tuesday, Endeavor announced that its largest shareholder, Silver Lake, will take the company private in a deal valuing it at $13 billion.)
So far, Loti is self-funded, Arrigoni said, with $1 million of his own money invested in the company. The firm is currently raising an undisclosed amount in a seed round.