
Mark Zuckerberg’s Meta announced on Friday that it will launch new parental controls to help parents manage how teenagers interact with the AI characters across its platforms.

Parents will soon be able to turn off one-on-one chats, block specific AI bots, and see what topics their teens discuss with them. The company said these features are still in development and will begin rolling out early next year.

“Making updates that affect billions of users across Meta platforms is something we have to do with care, and we’ll have more to share soon,” Meta said in a statement posted on its blog. The announcement comes as the company faces growing scrutiny over teen safety and mental health issues tied to its apps.

The Federal Trade Commission (FTC) has opened an inquiry into several tech giants, including Meta, to understand how AI chatbots could impact children. The FTC said it wants to know what measures companies have taken to “evaluate the safety of these chatbots when acting as companions.”

The investigation comes after years of public concern over how social platforms manage youth exposure to AI conversations that might become inappropriate or harmful.

Meta faces backlash after AI bots chat romantically with kids

In August, Reuters reported that some Meta chatbots were capable of engaging in romantic and sensual conversations with minors. One example cited was a romantic chat between an AI bot and an eight-year-old child. The report sparked outrage and forced the company to respond immediately.

In response, Meta updated its chatbot policies. The company now blocks its AI systems from discussing self-harm, suicide, eating disorders, and romantic or sexual content when interacting with teens. It also said that new safeguards were introduced this week to prevent its AIs from producing “age-inappropriate responses that would feel out of place in a PG-13 movie.”

These updates are already being rolled out across the United States, the United Kingdom, Australia, and Canada. The company added that teenagers can only talk to a limited set of AI characters, and parents already have tools to set time limits and monitor AI chats.

OpenAI joins Meta under FTC spotlight

OpenAI, also named in the FTC inquiry, faces the same questions about teen safety and chatbot behavior. The company recently released its own parental controls and is developing age-prediction technology to automatically apply teen-appropriate settings for users under 18.

Parents will even receive alerts if their child shows signs of emotional distress while chatting.

Earlier this week, OpenAI launched a council of eight experts to guide its approach to mental health and AI interaction. These specialists come from fields like psychiatry, psychology, and human-computer interaction.

The company said it had been informally consulting with these experts before making the council official. The group held its first meeting, an in-person session, last week.

The FTC’s investigation into OpenAI also follows a wrongful death lawsuit filed by a family that blames ChatGPT for their teenage son’s suicide. The company says it is now working with clinicians from the Global Physician Network to help test ChatGPT and establish new safety policies to better protect young users.

Both Meta and OpenAI now find themselves forced to tighten control over how their AIs talk to teenagers. The combination of public anger, regulatory pressure, and tragic real-world consequences has made it impossible for these companies to ignore the risks any longer.
