Meta has filed a lawsuit in Hong Kong against Joy Timeline HK Limited, the company behind CrushAI, an app accused of creating and promoting fake sexually explicit images of people without their consent.
The lawsuit follows thousands of policy-violating ads that bypassed Meta’s screening process and appeared across Facebook and Instagram.
Between 1st and 14th January 2025 alone, CrushAI reportedly managed to push over 8,000 ads for its so-called “AI undresser” through Meta’s platforms. These ads linked users to sites that used artificial image manipulation to simulate nudity.
Around 90% of the app’s traffic came directly from Meta-owned platforms, according to Alexios Mantzarlis of the Faked Up newsletter.
The scale and frequency of these violations amount to more than a policy breach; they represent a direct attack on user safety and digital dignity. Meta says it repeatedly removed the ads, but Joy Timeline HK persisted, setting up new accounts and domains faster than the company could block them.
In one case, ad accounts appeared under names like “Eraser Annyone’s Clothes” [sic] with a rotating list of numeric identifiers.
“I flagged several of these websites to Meta myself,” Mantzarlis wrote in January, highlighting how even public reporting didn’t immediately stop the flood of inappropriate content.
Meta’s frustration appears to have reached a boiling point. The lawsuit marks a shift from internal enforcement to legal confrontation. It also reflects a recognition that digital safety measures must evolve faster to deal with bad actors, who are usually more agile and less bound by ethical or legal constraints.
These “nudify” tools have become a growing challenge across the internet. Platforms like X, Reddit, YouTube, and even app stores have seen a surge in such services, with ads targeting users indiscriminately.
TikTok and Meta have both banned search terms like “nudify” and “undress,” but policing this content has proven far more difficult in practice.
To stay ahead, Meta says it has built new detection systems capable of identifying problematic ads even when no explicit imagery is used. These systems use matching technology and a larger database of flagged terms and symbols to uncover deceptive ad content that previously went undetected.
The company is also disrupting coordinated ad networks. Since January 2025, Meta claims it has dismantled four major clusters of fake advertiser accounts promoting nudify services. These operations mirrored tactics used by disinformation and fraud networks: rapid domain switching, coordinated accounts, and evasion of AI filters.
“Today, we’ve filed a lawsuit against the entity behind CrushAI and are taking other steps to clamp down on nudify apps,” Meta said in a statement. “We have strict rules against non-consensual intimate imagery – whether it’s real or AI-generated – including the promotion of nudify apps.”
Meta has begun sharing intelligence with other tech companies through the Tech Coalition’s Lantern programme, a partnership involving platforms like Google and Snap to tackle child exploitation. Since March, Meta has supplied more than 3,800 offending URLs for review and takedown.
The firm is also lobbying for stronger legislative frameworks. In the U.S., Meta backed the Take It Down Act, which criminalises the publication of non-consensual intimate imagery, including AI-generated images, and requires platforms to remove it once reported. The company also supports legislation that would give parents more control over the apps their teens download.
The company says it’s working closely with lawmakers to put these measures into effect globally.