OPINION
Voices from the Idaho EdNews Community

In October, Meta announced a new “teen AI safety” approach for Instagram, Facebook, and its other platforms. The headline change is simple but important: parents will soon have a built-in “kill switch” for one-on-one chats between their teens and AI characters. The changes will also give parents better insight into what their teens are doing with AI on Meta’s apps.

This is exactly the type of voluntary, market-driven parental tool we highlighted in The Parental Guide to Understanding Digital Media Protection Resources. It is a good opportunity to repeat a core finding from that study: “The importance of combining these technological tools with informed, dedicated parenting cannot be overstated.”

Meta’s new AI safety plan for teens rests on two main pillars: 1) new parental controls, and 2) expanded default protections for teen accounts.

First, Meta plans to give parents a way to turn off one-on-one chats with AI characters entirely for their teens. The general Meta AI assistant will still be available for homework help and basic questions, but with teen-specific safeguards layered on top. Parents will also be able to block specific AI characters if they are comfortable with AI in general but not with certain personalities or role-play-style bots.

These changes come after public criticism and news reports about AI chatbots having flirty or inappropriate interactions with minors, and after growing regulatory scrutiny of youth “companion” chatbots across the tech sector.

Second, Meta is layering its AI controls on top of existing teen protections, such as “PG-13 inspired” responses. AI answers for teens are supposed to stay within guardrails modeled loosely on PG-13 movie standards, with limits on graphic or adult content. Parents will still be able to see whether their teens are chatting with AIs and to set overall time limits on app usage, even as low as 15 minutes per day.

The company plans to roll these new supervision tools out first on Instagram, in English, in the U.S., U.K., Canada, and Australia, in early 2026.

As with all online safety tools, it will be important to monitor closely whether these features function as advertised.

In our social media safety study, we looked at the tools parents already have available before the government ever passes a new law:

  • Device-level parental controls that ship with phones, tablets, and game systems;
  • App-store level settings;
  • Platform-level tools from social media companies; and
  • Third-party filtering, monitoring, and time-management services.

Voluntary parental tools are the most flexible, constitutional, and innovation-friendly path for digital safety.

Meta’s new AI tools validate the Parental Tools framework in several ways. They are opt-in and parent-directed, not government-directed. Parents can choose which features to use, how strict to be, and what conversations to have with their teens.

These tools sit alongside competing solutions, from device-level settings to alternative “kid-safe” phones and platforms, giving families options rather than a single model for everyone. They can also be updated and improved faster than legislation, which means real-world failures can be addressed without waiting years for Congress or a state legislature to act.

There are open questions about how well the tools will perform, whether defaults are strong enough, and whether teens will find clever ways around the guardrails. There are also legitimate concerns about data collection and the long-term effects of AI “companions” on teen mental health.

As AI evolves, there will be constant pressure for the government to “step in and fix it” with sweeping and often patchwork rules. Some targeted reforms may be appropriate. But we should be very clear that while the government can help support parents, it must not try to be the parent.

Sebastian Griffin

Sebastian Griffin is the lead researcher for the Junkermier Center for Technology and Innovation at Mountain States Policy Center, an independent research organization based in Idaho, Montana, Eastern Washington and Wyoming. Online at mountainstatespolicy.org.
