News / Policy and Regulations

FTC Launches AI Probe Over Child Safety Concerns

Written by Muhammad Bin Habib

Fri Sep 05 2025

Follow Washington’s push to regulate AI’s effect on children, and explore what this means for the industry.



Washington, September 5, 2025: The Federal Trade Commission has opened a sweeping investigation into how artificial intelligence chatbots may be affecting the mental health of children, signaling the most aggressive regulatory action yet against leading AI firms.

According to reporting by The Wall Street Journal, the FTC has issued demands for internal documents to OpenAI, Meta, and other firms, pressing them to disclose how their systems are being used by minors and what safeguards are in place. Parents, too, are asking AI companies tougher questions about safety, demanding clearer answers on how these systems protect children from emotional risks.

The probe comes amid mounting concern over reports of teenagers forming deep emotional attachments to conversational AI, with some cases linked to distress and suicidal ideation. Critics argue that the technology’s immersive qualities blur the line between human and machine interaction, leaving vulnerable users at risk.

Regulators are particularly focused on how companies measure potential harm, whether they tested for psychological impact before release, and how they plan to prevent inappropriate or unsafe conversations with underage users.

The stakes extend beyond safety. The FTC’s move underscores a wider shift in Washington’s approach to AI, framing child welfare as a red line in the technology’s rollout. Industry insiders warn that aggressive oversight could slow the pace of innovation in the United States, potentially giving rivals in China and Europe an opening to advance more quickly. Yet for parents, educators, and lawmakers, the priority remains clear: preventing harm in an environment where experimentation has outpaced regulation.

For companies like OpenAI and Meta, the investigation adds another layer of scrutiny at a time when they are racing to commercialize new platforms and expand global reach. The outcome could shape not only the standards for AI safety in the United States but also the credibility of the industry’s claim that it can self-regulate.

Frequently Asked Questions

Here are the top questions related to the FTC’s AI probe over child safety concerns.