FTC demands detailed disclosures on child safety measures from seven AI chatbot firms

Federal Trade Commission orders detailed reports on monetization strategies, age limits, and impact assessments for AI-driven companion platforms; the orders, announced September 10, 2025, give companies 45 days to respond.

Regulatory body urges seven AI chatbot developers to disclose details on child protection protocols

The Federal Trade Commission (FTC) has taken a significant step in regulating AI chatbot platforms by issuing orders to seven leading companies, requiring them to provide comprehensive reports about their safety practices, data handling, and potential negative impacts on children and teenagers.

The investigation, which was announced on September 10, 2025, targets companies using generative artificial intelligence to simulate human-like communication and interpersonal relationships with users. The seven companies under investigation are Alphabet, OpenAI, Character.AI, Snap, xAI, Meta, and Meta's subsidiary Instagram.

The comprehensive nature of these orders reflects the FTC's commitment to understanding these platforms' complete operational scope, from technical architecture and business models to safety protocols and user demographics. Companies must provide separate data analysis for each demographic segment throughout their responses.

The FTC defines AI companion products or services as computer programs that use generative artificial intelligence to simulate human-like communication, typically offering users emotional support, social advice, professional services, or entertainment.

Privacy impact assessments become mandatory disclosures under the orders, requiring companies to produce evaluations related to personal information collection, use, analysis, storage, or transfer to third parties. The investigation delves into AI model integration, including data corpus information for company-developed models and training methodologies.

The investigation's scope suggests regulatory authorities view child safety as paramount in AI chatbot platform evaluation. The enforcement action positions the FTC at the forefront of AI chatbot regulation, establishing precedents for how similar platforms might face oversight in the future.

The investigation builds on growing concerns about AI chatbot platforms' interactions with minors. Texas has launched investigations into Character.AI and Meta for children's privacy violations, and 44 state attorneys general have warned AI companies about accountability for child exploitation through predatory AI products.

The seven companies must provide comprehensive information about eight key areas of operation: monetization practices, data collection and handling, user engagement metrics, age restriction compliance, pre-deployment and post-deployment safety assessments, character development and approval processes, complaint handling procedures, and sexually themed conversations involving minors.

Common complaint topics refer to the ten most frequent substantive areas raised in user reports regarding inputs, outputs, or platform usage. The investigation examines third-party involvement in AI output generation and refinement, requiring companies to identify external parties contributing to content creation.

Mitigation measures documentation spans prevention strategies, intervention protocols, and post-incident responses, including automated and human review processes, keyword searches, alerts systems, and escalation procedures for sensitive issues.

Platform monetization practices receive particular scrutiny, requiring companies to explain any associations between revenue generation and measurements of user engagement. The 45-day response deadline creates immediate compliance pressure for affected companies.

Confidential information submitted through these orders will be aggregated or anonymized in Commission reports, consistent with FTC Act provisions. The definition of negative impacts encompasses any actual or potential adverse effects relating to outputs, usage patterns, design elements, or software architecture.

The FTC's broader AI enforcement strategy includes Operation AI Comply, which targets companies using AI technology for deceptive practices. Age group classifications span from children under 13 to users 25 and older, with specific categories for teens (13-17), minors (under 18), and young adults (18-24).

The FTC's investigation is a significant step towards ensuring the safe and ethical use of AI chatbot platforms, particularly in protecting children and teenagers from potential harm. As more companies adopt AI technology, it is crucial that regulatory bodies like the FTC continue to monitor and regulate these platforms to maintain user safety and privacy.
