Last week, Taylor Swift filed a trio of trademark applications to protect her image and voice. One is meant to cover a well-known photograph of the pop singer holding a pink guitar during a concert on her record-breaking Eras tour, while the two sound trademarks are for simple identifying phrases: “Hey, it’s Taylor Swift,” and “Hey, it’s Taylor.”
The move comes as AI deepfakes continue to proliferate across social media. Any individual stands to have their likeness exploited in the creation of nonconsensual AI-generated material; earlier this month, an Ohio man was the first person convicted under a new federal law criminalizing “intimate” visual deceptions of this sort. Celebrities, meanwhile, find themselves at risk of both explicit deepfakes and false endorsements.
A new report from AI detection company Copyleaks shows that Swift and other stars have recently had their likenesses used in scammy advertisements. Researchers identified a cluster of sponsored videos on TikTok that appeared to show Swift, Kim Kardashian, Rihanna, and others promoting “potentially fraudulent or malicious services,” with the clips making use of what the researchers call “realistic-sounding voices” as well as “textured filters meant to mask some of the flaws in the AI-generated visuals.”
The fake ads show Swift et al. in what seem to be common interview settings—red carpet events or talk show sets. Rather than answering questions, however, the AI-generated celebrities talk up supposed rewards programs in which TikTok users are paid for offering feedback on content served to them.
“I was reading about digital behavior this week and came across a testing feature called TikTok Pay,” says a deepfaked Swift in an ad that uses manipulated footage from an appearance the real Swift made on The Tonight Show Starring Jimmy Fallon in October. “Certain users are being invited to watch videos and submit opinions.” The deepfaked Swift goes on to say that the program is in “limited rollout” for the moment but encourages viewers to see if they qualify for it, adding: “If the page opens for you, don’t overthink it.”
Naturally, anyone who clicks is accepted. These ads eventually lead the user to a third-party service that, despite the TikTok name and logo, has evidently been vibe-coded using the AI platform Lovable, whose own branding appears on the page and in the URL. At this point, the researchers say, the user is prompted to begin entering their name and personal information.
While it’s not clear what the advertisers intend to do with all the data mined through their celebrity deepfake promotion, scam ads with similar objectives are exceedingly common. Last week, the nonprofit Consumer Federation of America sued Meta, alleging that the tech giant misled Facebook and Instagram users about its efforts to crack down on scam ads—and profited by allowing them to proliferate. On Monday, the US Federal Trade Commission reported that social media scams have surged overall, with Facebook scams accounting for the largest share of financial losses.
It’s no surprise that Swift and her peers are taking legal steps to distance themselves from this fraudulent economy. While Swift hasn’t publicly commented on the reasoning behind her trademark filings, the reputational damage that deceitful deepfakes pose to her billion-dollar brand can hardly be overlooked. The trouble is, they grow more sophisticated by the day.