
Meta announced on Tuesday that all Instagram teen accounts will now default to a PG-13 content setting, aligning the platform’s content moderation with a widely recognised film-rating standard. The change, which the company describes as its most significant teen safety update yet, limits what users under 18 can see and interact with across the app.
In a blog post, Meta said the move is intended to ensure that the experience of teens on Instagram “feels similar to watching a PG-13 movie,” where some mild suggestive material or language may appear, but explicit or harmful content remains off-limits. Teens under 18 will automatically be placed in the new 13+ content setting and cannot opt out without parental approval.
Why PG-13? Aligning with a familiar standard
According to Meta, the PG-13 framework provides a reference point many parents already understand. The company compared its new policies against film ratings and refined them to exclude a wider range of potentially inappropriate material.
This means that content featuring strong language, risky stunts, or references to adult behaviour, including alcohol, marijuana, or other restricted items, will now be hidden or not recommended to teens.
Meta cited an Ipsos survey it commissioned, which found that 95 per cent of parents in the US believe the new settings will help create safer online experiences, while 90 per cent said the settings make it easier to understand what teens might see on the platform.
Introducing ‘Limited Content’ for extra parental control
For families seeking tighter restrictions, Meta is rolling out a new ‘Limited Content’ option. This setting blocks even more content types and entirely disables features such as commenting. It will also extend to AI interactions, ensuring that Meta’s chat-based features avoid inappropriate topics.
The company said 96 per cent of surveyed parents appreciated having the option to apply stricter filters, even if they do not plan to use them.
Expanding AI and content moderation systems
To enforce the new guidelines, Meta said it has upgraded its AI systems to better detect and filter material that violates the updated teen standards. Teens will no longer be able to follow or interact with accounts that share inappropriate content, nor will such accounts be recommended in search or feeds.
Instagram will also block mature or sensitive search terms, such as “alcohol” or “gore,” even when misspelled. Content that breaches these restrictions will be unopenable for teens even when it is shared with them via direct messages.
Parents invited to shape Instagram’s next steps
Meta claims to have built these updates based on direct feedback from parents worldwide, including over three million content ratings used to refine what counts as age-appropriate. The company plans to continue this collaboration through regular surveys and a new reporting tool, which lets parents flag content they believe should be hidden from teens.
In recent internal tests, Meta said, fewer than two per cent of posts shown to teens were considered inappropriate by most parents.
Global rollout underway
The PG-13-based content filters are rolling out first in the US, UK, Australia, and Canada, with full deployment expected by the end of the year. Meta plans to expand the feature worldwide in 2026 and extend similar protections to teens who falsely register as adults.
“These updates reflect our ongoing commitment to helping teens have safer experiences online,” Meta said, adding that it also plans to introduce similar age-based protections on Facebook in the coming months.