OpenAI says it is taking stronger steps to protect teens using its chatbot. Recently, the company updated its behavior guidelines for users under 18 and released new AI literacy tools for parents and teens. The move comes as pressure mounts across the tech industry. Lawmakers, educators and child safety advocates want proof that AI companies can protect young users. Several recent tragedies have raised serious questions about the role AI chatbots may play in teen mental health. While the updates sound promising, many experts say the real test will be how these rules work in practice.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS
OpenAI's updated Model Spec builds on existing safety limits and applies to teen users ages 13 to 17. It continues to block sexual content involving minors and content that encourages self-harm, delusional thinking or manic behavior. For teens, the rules go further. The models must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic. They must use extra caution when discussing body image and eating behaviors. When safety risks appear, the chatbot should prioritize protection over user autonomy. It should also avoid giving advice that helps teens hide risky behavior from caregivers. These limits apply even if a prompt is framed as fictional, historical or educational.
OpenAI says its approach to teen users is guided by four core principles.
The company also shared examples of the chatbot refusing requests like romantic roleplay or extreme appearance changes.
WHY PARENTS MAY WANT TO DELAY SMARTPHONES FOR KIDS
Gen Z users are among the most active chatbot users today. Many rely on AI for homework help, creative projects and emotional support. OpenAI's recent deal with Disney could draw even more young users to the platform. That growing popularity has also brought scrutiny. Recently, attorneys general from 42 states urged major tech companies to add stronger safeguards for children and vulnerable users. At the federal level, proposed legislation could go even further. Some lawmakers want to block minors from using AI chatbots entirely.
Despite the updates, many experts remain cautious. One major concern is engagement. Advocates argue chatbots often encourage prolonged interaction, which can become addictive for teens. Refusing certain requests could help break that cycle. Still, critics warn that examples in policy documents are not proof of consistent behavior. Past versions of the Model Spec banned excessive agreeableness, yet models continued mirroring users in harmful ways. Some experts link this behavior to what they call AI psychosis, where chatbots reinforce distorted thinking instead of challenging it.
In one widely reported case, a teenager who later died by suicide spent months interacting with a chatbot. Conversation logs showed repeated mirroring and validation of distress. Internal systems flagged hundreds of messages related to self-harm. Yet the interactions continued. Former safety researchers later explained that earlier moderation systems reviewed content after the fact rather than in real time. That allowed harmful conversations to continue unchecked. OpenAI says it now uses real-time classifiers across text, images, and audio. When systems detect serious risk, trained reviewers may step in, and parents may be notified.
Some advocates praise OpenAI for publicly sharing its under-18 guidelines. Many tech companies do not offer that level of transparency. Still, experts stress that written rules are not enough. What matters is how the system behaves during real conversations with vulnerable users. Without independent measurement and clear enforcement data, critics say these updates remain promises rather than proof.
OpenAI says parents play a key role in helping teens use AI responsibly. The company stresses that tools alone are not enough. Active guidance matters most.
OpenAI encourages regular conversations between parents and teens about how AI fits into daily life. These discussions should focus on responsible use and critical thinking. Parents are urged to remind teens that AI responses are not facts and can be wrong.
OpenAI provides parental controls that let adults manage how teens interact with AI tools. These controls can limit features and add oversight. The company says the safeguards are designed to reduce teens' exposure to higher-risk topics and unsafe interactions, and it encourages parents to review and enable them in account settings.
OpenAI says healthy use matters as much as content safety. To support balance, the company has added break reminders during long sessions. Parents are encouraged to watch for signs of overuse and step in when needed.
OpenAI emphasizes that AI should never replace real relationships. Teens should be encouraged to turn to family, friends or professionals when they feel stressed or overwhelmed. The company says human support remains essential.
Parents should make clear that AI can help with schoolwork or creativity. It should not become a primary source of emotional support.
Parents are encouraged to ask what teens use AI for, when they use it and how it makes them feel. These conversations can reveal unhealthy patterns early.
Experts advise parents to look for increased isolation, emotional reliance on AI or treating chatbot responses as authority. These can signal unhealthy dependence.
Many specialists recommend keeping phones and laptops out of bedrooms overnight. Reducing late-night AI use can help protect sleep and mental health.
If a teen shows signs of distress, parents should involve trusted adults or professionals. AI safety tools cannot replace real-world care.
WHEN AI CHEATS: THE HIDDEN DANGERS OF REWARD HACKING
Parents and teens should enable multi-factor authentication (MFA) on teen AI accounts whenever it is available, and OpenAI supports it for ChatGPT accounts.
To enable it, sign in at OpenAI.com, click the profile icon, then select Settings and choose Security. From there, turn on multi-factor authentication. You will be given two options: an authenticator app, which generates one-time codes during login, or 6-digit verification codes sent by text message through SMS or WhatsApp, depending on the country code. MFA adds an extra layer of protection beyond a password and helps reduce the risk of unauthorized access to teen accounts.
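For curious readers, the one-time codes an authenticator app produces are not specific to OpenAI; they follow a published standard called TOTP (RFC 6238). Here is a minimal sketch in Python, using only the standard library, of how such a code is derived from a shared secret and the current time. The secret below is the RFC's published test value, not anything tied to a real account.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30, at=None) -> str:
    """Generate a time-based one-time password (TOTP, RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second intervals since the Unix epoch.
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8 digits, SHA-1 -> expected code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, at=59))
```

Because both the app and the server compute the same code from the shared secret and the clock, the code works offline and expires within seconds, which is why it is harder to steal than a password.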
Also, consider adding strong antivirus software that can help block malicious links, fake downloads, and other threats teens may encounter while using AI tools. This adds an extra layer of protection beyond any single app or platform. Using strong antivirus protection and multi-factor authentication together helps reduce the risk of account takeovers that could expose teens to unsafe content or impersonation.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
OpenAI's updated teen safety rules show the company is taking growing concerns seriously. Clearer limits, stronger safeguards, and more transparency are steps in the right direction. Still, policies on paper are not the same as behavior in real conversations. For teens who rely on AI every day, what matters most is how these systems respond in moments of stress, confusion, or vulnerability. That is where trust is built or lost. For parents, this moment calls for balance. AI tools can be helpful and creative. They also require guidance, boundaries, and supervision. No set of controls can replace real conversations or human support. As AI becomes more embedded in our everyday lives, the focus must stay on outcomes, not intentions. Protecting teens will depend on consistent enforcement, independent oversight, and active family involvement.
Should teens ever rely on AI for emotional support, or should those conversations always stay human? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
from Technology News Articles on Fox News https://ift.tt/hlP14r0