Meta has announced a new wave of safety measures aimed at shielding teenagers and child-focused accounts on Instagram from abuse, in a bid to curb inappropriate content and interactions involving minors.
In response to ongoing concerns over online child exploitation, sextortion, and grooming tactics that continue to surface across digital platforms, the new measures strengthen protections in two key areas: direct messaging features for teen users and account settings for profiles managed by adults that prominently feature children.
Instagram’s teen accounts will now come with more explicit safety tools within direct messages (DMs). Users will be able to see when the account messaging them joined Instagram, access instant safety tips, and use a new feature that combines the block and report options in a single action. The aim is to make it easier for teenagers to cut off suspicious contacts and flag potential violators.
Meta said these additions build on existing safety notices that encourage caution during private chats. Data from June shows that teens took action after receiving these alerts, blocking over 1 million accounts and reporting another 1 million.
Also introduced is the “Location Notice,” which alerts users if they are communicating with someone in a different country, a feature designed to disrupt common tactics used in sextortion scams. The company says one in ten users tapped on the notice to learn more about their options for protection.
Nudity protection, another key feature, remains turned on by default for teen accounts. Meta disclosed that 99% of users, teens included, have opted to keep the filter active. In June alone, more than 40% of blurred images sent via DMs remained unopened, helping reduce exposure to graphic content.
“In May, people decided against forwarding around 45% of the time after seeing this warning,” Meta added.
In a parallel effort, Meta is extending parts of these protections to Instagram accounts managed by adults that focus heavily on children. These include parent-run profiles and accounts controlled by talent managers of young influencers.
While Instagram prohibits under-13s from independently owning accounts, exceptions are made for accounts marked as being adult-managed on the child’s behalf.
Unfortunately, these accounts have also been targeted by abusers. According to Meta, some users have left sexualised comments or sent inappropriate DMs in violation of platform rules. The company is responding by automatically applying its strictest message settings to these profiles and enabling “Hidden Words,” a feature that filters out offensive language in comments.
Users managing such accounts will receive a notification at the top of their feed alerting them to these changes and prompting them to review their privacy settings.
Meta is also taking steps to make these child-focused accounts harder to discover for adults flagged as suspicious, especially those previously blocked by teens. The platform will prevent such adults from finding these profiles via search, hide their comments, and avoid suggesting either party to each other through recommendations.
This builds on previous restrictions, including blocking such accounts from offering subscriptions or receiving digital gifts.
So far this year, Meta has taken down nearly 135,000 Instagram accounts for violating child protection rules, specifically those caught soliciting sexual content or making inappropriate comments on accounts featuring children. A further 500,000 Facebook and Instagram accounts connected to those flagged profiles were also removed.
“While these accounts are overwhelmingly used in benign ways, unfortunately, there are people who may try to abuse them, leaving sexualised comments under their posts or asking for sexual images in DMs, in clear violation of our rules,” Meta wrote.
Meta has not limited its response to its own platform. The company is sharing intelligence about abusive accounts with other tech firms through the Tech Coalition’s Lantern programme, acknowledging that predators often operate across multiple services.
The announcement also lands as regulators and child safety advocates, including the U.S. Surgeon General, have criticised platforms like Instagram over their mental health impact on minors. Some U.S. states have even introduced laws requiring parental consent for underage social media use.