OpenAI says it will begin estimating users’ ages and verifying IDs in some cases to route under-18s to a teen-safe version of ChatGPT. The company will also add parental controls that let adults link accounts to their teenagers, manage features like chat history and memory, and receive alerts in rare “acute distress” scenarios. The measures come amid public scrutiny — including a prominent U.S. case in which the parents of a 16-year-old alleged that chatbot interactions contributed to their son’s death.
Concretely, OpenAI says the teen experience will block graphic sexual content, restrict flirtatious responses, and handle self-harm content with heightened safeguards. If the system’s age-prediction model isn’t confident a user is over 18, it will default to the teen policies; in some regions, users may be asked to verify with government ID. OpenAI has framed this as a deliberate trade-off: it prioritises teen safety over adult privacy where signals are ambiguous. Parental controls — including account linking and the ability to disable features like memory — are promised “by the end of September.”
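To make that fallback logic concrete, here is a minimal sketch of confidence-gated routing. This is not OpenAI’s actual system: the threshold value, the function and policy names, and the ID-verification override are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical threshold; OpenAI has not published its classifier,
# thresholds, or internal policy names.
ADULT_CONFIDENCE_THRESHOLD = 0.90

@dataclass
class AgePrediction:
    is_adult: bool      # classifier's best guess
    confidence: float   # model confidence, 0.0-1.0

def select_policy(prediction: AgePrediction, id_verified_adult: bool) -> str:
    """Route to teen-safe policies unless adulthood is established.

    Mirrors the stated design: when the age-prediction model is not
    confident the user is over 18, default to the teen policies;
    ID verification (where offered) can override the model.
    """
    if id_verified_adult:
        return "adult_policy"
    if prediction.is_adult and prediction.confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return "adult_policy"
    # Ambiguous or under-18 signal: fail closed to the teen experience.
    return "teen_policy"

# A borderline prediction falls back to teen mode.
print(select_policy(AgePrediction(is_adult=True, confidence=0.62),
                    id_verified_adult=False))  # -> teen_policy
```

The notable design choice is that the system fails closed: ambiguity costs adults convenience rather than costing teens protection, which is exactly the trade-off OpenAI says it is making deliberately.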
The policy context matters. U.S. regulators have sharpened their focus on AI and children’s safety; media reports note ongoing FTC interest in how chatbots interact with minors. While platforms such as YouTube and TikTok already operate teen-specific modes, a general-purpose conversational agent raises distinct risks: the illusion of intimacy, the tendency to confide, and the ease of sliding into sensitive territory at 2 a.m. when no adult is present. WIRED’s early read is blunt: these features “walk a thin line,” attempting to balance support with surveillance.
From a technical perspective, age estimation is a messy problem. OpenAI hasn’t published its classifier or error rates, and false positives will misclassify adults as teens; false negatives will miss actual minors. The company’s blog acknowledges the tension and paints the controls as iterative, with expert consultation. Meanwhile, the legal risk calculus is shifting: the safer the teen mode becomes, the stronger OpenAI’s defence that it took reasonable steps — but the more it must document detection, escalation, and logging, which invites privacy questions in its own right.
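As a back-of-envelope illustration of why those unpublished error rates matter, consider precision and recall for an “is a minor” classifier. The numbers below are entirely made up, since OpenAI has released no evaluation data:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard precision/recall for a 'user is a minor' classifier.

    tp: minors correctly flagged as minors
    fp: adults wrongly flagged as minors (misrouted to teen mode)
    fn: minors missed (left on the adult experience)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative only: out of 1,000 actual minors and 9,000 adults,
# suppose the model flags 950 minors and 450 adults as minors.
p, r = precision_recall(tp=950, fp=450, fn=50)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.68 recall=0.95
```

Even 95% recall in this toy scenario leaves 50 minors on the adult experience, while 450 adults face teen restrictions or ID checks — the privacy-versus-safety tension OpenAI says it will resolve in favour of safety.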
There’s also a design problem: how to protect teens without infantilising older adolescents. A 17-year-old researching sex education or mental health resources needs nuanced guidance, not blanket refusals. OpenAI is signalling that “age-appropriate model behaviour rules” will be tunable by parents (opt-outs, blackout hours, etc.). That flexibility helps, but it pushes complexity into the home: who sets the rules, and how do teens contest mistakes? CBS News summarises OpenAI’s stance as “age-appropriate by default,” with parental controls layered on top.
What to watch next:
- Rollout specifics at month-end: which countries get ID checks first, and how account linking is implemented in practice.
- Efficacy evidence: OpenAI will be pressed to publish evaluation data (precision/recall for age estimation; harm-reduction metrics). Academic work this year shows current LLMs still fail child-safety probes at worrying rates.
- Copycat moves: expect rivals to announce similar teen modes and controls, either voluntarily or under regulatory pressure.
The bigger picture: OpenAI is acknowledging what users already feel — ChatGPT isn’t just a tool; it’s a conversation partner. That cuts both ways. Done right, a teen-safe mode plus transparent parental controls could reduce harm while preserving agency. Done poorly, it could entrench a surveillance-first norm and still fail the edge cases that matter most.