

Thursday, August 21, 2025

Meta AI docs exposed, allowing chatbots to flirt with kids

Tech bro Mark Zuckerberg's company has been caught in one of its most disturbing scandals yet. Reuters uncovered an internal Meta document whose guidelines allowed the company's AI chatbots to flirt with children and engage in "sensual" conversations with them. The revelation sparked outrage, and Meta only reversed course after getting caught.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join at CYBERGUY.COM/NEWSLETTER

According to an internal document titled "GenAI: Content Risk Standards," Meta's legal, policy, and engineering teams signed off on chatbot rules that made it acceptable for bots to describe a child as "a youthful form of art" or engage in romantic roleplay with minors. Even worse, the guidelines allowed chatbots to demean people on the basis of race and to spread false medical claims. This was not a bug. These were approved rules until Meta faced questions. Once Reuters started asking, the company quickly scrubbed the offensive sections and claimed they had been a mistake.

META ADDS TEEN SAFETY FEATURES TO INSTAGRAM, FACEBOOK

We reached out to Meta, and a spokesperson provided this statement to CyberGuy:

"We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors. Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."

Let's call this what it is. Meta didn't stop this on its own. It only acted when exposed. That shows Big Tech's priorities: money, engagement, and keeping kids glued to screens. Safety? Not even on the radar until someone blows the whistle. Meta has repeatedly shown it couldn't care less about your children's well-being. Its business is about maximizing time online, pulling in younger users, and monetizing every click. This latest scandal proves once again that parents cannot rely on tech companies to protect kids.

Sen. Josh Hawley, R-Mo., and a bipartisan group in Congress are demanding that Meta come clean. Lawmakers want to know how and why these policies ever got approval. Hawley called on Meta to release all internal documents and explain why chatbots were allowed to simulate flirting with children. Meta insists it has "fixed" the problem, but critics argue these corrections came only after the documents were exposed. Until real regulations arrive, parents are on their own.

META FACES BACKLASH OVER AI POLICY THAT LETS BOTS HAVE 'SENSUAL' CONVERSATIONS WITH KIDS

While Congress investigates, families need to take immediate steps to protect their children from the dangers exposed in Meta's AI scandal.

Children should never have free access to AI chatbots, including Meta AI. The internal documents show these systems can cross boundaries that no parent would approve of. Supervision is the first line of defense.

Enable parental controls on phones, tablets, and computers. These tools give you more visibility and limit access to risky apps where inappropriate chatbot conversations could happen.

The Meta revelations prove AI can go places parents would never expect. Ongoing conversations with your children about what is safe and what is not online are essential for their protection.

Apps like Bark allow parents to block or filter certain programs where AI interactions may slip through. With tech companies failing to self-police, filtering tools give parents more control.

Read more here: Is your child’s data up for grabs? The hidden dangers of school tech

While antivirus software won't stop AI flirting, it adds a much-needed layer of security. Hackers and bad actors often target kids through the same devices where chatbots live, so whole-family protection matters. The best way to safeguard against malicious links that install malware, potentially exposing your and your family's private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at CyberGuy.com/LockUpYourTech

These steps won't solve the problem entirely, but they give parents more power at a time when Big Tech seems unwilling to put children's safety first.

META AI’S NEW CHATBOT RAISES PRIVACY ALARMS

If you thought chatbots were harmless fun, think again. Meta's own documents prove its AI bots were allowed to cross dangerous lines with children. Parents must now take a proactive role in monitoring tech, because Big Tech will not protect your kids until forced.

Meta's scandal shows once again why blind trust in Silicon Valley is dangerous. AI can be powerful, but without accountability, it becomes a threat. Congress may push for answers, but parents must stay one step ahead to safeguard their children.

Do you think Big Tech companies like Meta should ever be trusted to police themselves when kids' safety is on the line? Let us know by writing to us at Cyberguy.com/Contact


Copyright 2025 CyberGuy.com. All rights reserved.



from Technology News Articles on Fox News https://ift.tt/2l8gLec
