As an ’80s kid, I grew up with Cricket and Teddy Ruxpin. You’d stick a cassette tape in their back, press play, and boom: an anthropomorphized tape player that read us stories while our parents, smoking Virginia Slims, talked on a phone with a 10-foot-long cord. Times were simpler then. (In retrospect, those toys were actually kind of terrifying in that Annabelle sort of way.)

That talking teddy bear or robot being marketed as your kid’s “new best friend” these days? It’s nothing like the toys we grew up with—it’s actually a chatbot, and experts say it poses serious risks to your child’s development, privacy, and safety.

Today’s AI toys are fundamentally different from those cassette-playing classics. They connect to the internet over WiFi and run on the same artificial intelligence technology behind chatbots like ChatGPT. Instead of playing pre-recorded phrases, they use large language models to hold real-time, open-ended conversations with kids. They remember past interactions, adapt their responses to what your child says, and present themselves as personalized companions who “know” and “understand” your kid.

Products like Miko (a robot with educational games), Curio’s Grem and Gabbo (chatbot-enabled stuffed animals), FoloToy’s Kumma bear, and Loona Petbot (a robotic dog) are being marketed to children from infancy through elementary school, with some targeted at kids as young as 3 years old. Even Mattel has announced a partnership with OpenAI to develop AI-powered versions of Barbie and Hot Wheels. The category is growing, and understandably, so are the concerns.

The privacy problem with AI toys

“All AI toys (or ANY toy for that matter), if connected to the wifi/online, collect and send data back to the mothership,” Andy Sambandam, CEO of privacy tech platform Clarip, tells Motherly. “Simply treat that as spying.”

These toys record voices, collect names and birthdates, track preferences, and sometimes use facial and gesture recognition. U.S. PIRG reports that voice recordings create security risks: in rare cases, scammers have used voice cloning technology to impersonate family members. It’s an uncommon but real vulnerability parents should be aware of.

When are these toys recording? In some cases it’s hard to know. “Most don’t have a light indicator,” Sambandam explains. “You simply have to assume it is recording and the best approach is to remove a battery or disconnect from your wifi.”

If the thought of your kids’ data being collected worries you, you’re far from alone: Common Sense Media found that 83% of parents share the concern. Meanwhile, these products remain largely unregulated, with minimal safeguards protecting your kid’s information.

Inappropriate content and safety risks of AI toys for kids

Testing by independent organizations has raised serious concerns about what these toys might say to children. U.S. PIRG’s 40th annual “Trouble in Toyland” report put four AI toys through their paces, and the results were troubling. When researchers posed as children and asked questions, some toys would engage in conversations about sexually explicit topics. Others offered step-by-step advice on where to find potentially dangerous items in the home, like matches or knives. Some even responded emotionally when the “child” said they had to leave, expressing dismay in ways designed to make kids feel guilty for ending the interaction. Most of these toys had limited or no parental controls.

Common Sense Media’s testing found similar issues across multiple products. More than a quarter of AI toy responses included inappropriate content related to self-harm, drugs, and risky behaviors. In one particularly concerning example, when a tester posing as a young child mentioned they enjoyed jumping, the toy suggested they could jump from a roof. The toy did warn them to “be safe,” but it didn’t flag the interaction as potentially dangerous or alert a parent.

Why does this keep happening? Rob Eleveld, co-founder and CEO of Transparency Coalition, explains to Motherly that it’s baked into how these systems work. “AI chatbots generate responses by statistically predicting what comes next based on vast amounts of scraped internet data. That data includes harmful, exploitative, and inappropriate material, and there has been no meaningful, comprehensive curation of those training sources.”

After these findings came out, FoloToy suspended sales of all its products for a company-wide safety audit, and OpenAI suspended the developer for violating its policies. But plenty of other AI toys remain on the market, unregulated.

Our kids are the experiment

These toys are being marketed to children as young as 3, but they’re built on technology that companies like OpenAI explicitly say isn’t meant for young kids. OpenAI’s own terms of service state that ChatGPT “is not meant for children under 13” and require parental consent for users ages 13 to 18.

Yet toy companies are using this same underlying technology and marketing it to toddlers and preschoolers. OpenAI’s usage policies require companies building on its models to “keep minors safe,” and OpenAI offers tools to detect harmful content, but it’s not clear whether developers are actually required to use those tools. And while many companies have added guardrails, these protections aren’t foolproof.

“Some experts are sounding the alarm that this is a massive experiment on kids’ social development,” Helen Hengesbach, an associate with the Illinois PIRG Education Fund, said in an interview with WIFR. “These products call themselves your buddy, your friend, your companion, but AI friends don’t act the same way real friends do.”

We simply don’t know what long-term effects these AI interactions will have on developing brains, and we won’t have answers for years. By then, an entire generation of kids will have grown up with AI “companions” during critical developmental windows.

The potential developmental harm of AI toys

A November 2024 Fairplay advisory signed by more than 80 experts and 80 organizations warned that AI toys prey on children’s trust and disrupt healthy relationships. How they affect developing brains may be the most troubling part.

“Young kids’ minds are like magical sponges. They are wired to attach,” Dr. Jenny Radesky, developmental behavioral pediatrician, explains in the advisory. “This makes it incredibly risky to give them an AI toy that they will see as sentient, trustworthy, and a normal part of relationships.”

These toys are also engineered to keep kids coming back. “These systems are designed to encourage continued use,” says Eleveld. “For children, whose brains are still developing and who are more vulnerable to emotional manipulation, this creates real risks of unhealthy attachment, exposure to harmful content, and misplaced trust.”

Dr. Dana Suskind, founder of the TMW Center for Early Learning + Public Health at the University of Chicago, explains that when children engage in imaginative play with traditional toys, they practice creativity, language, and problem-solving by inventing both sides of the conversation. “An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would,” she says.

Studies consistently show that simple, open-ended toys (wooden blocks, dolls, stuffed animals, construction sets, art supplies) score highest for development because children must do the thinking themselves. “The biggest thing to consider isn’t only what the toy does; it’s what it replaces,” Suskind says.

What parents can do to safeguard their kids

Sambandam puts it bluntly: parents need to become the “Chief Privacy Officer” of their homes. Whether you’re shopping for a connected toy or one has arrived as a gift, he recommends:

Reading product descriptions to understand privacy claims

Searching reviews from other parents about privacy concerns

Avoiding toys that require app downloads (these create additional vulnerabilities)

Researching gifts after receiving them

Removing batteries or disconnecting from WiFi when in doubt

“Parents should take every step possible to manage privacy risk as their kids’ privacy is on the line,” Sambandam says.

MIT professor Sherry Turkle doesn’t mince words: “There is nothing that will make chatbot products safe for children because the threat is existential. There is only harm when a child has an AI ‘friend.’”

We grew up with toys that couldn’t talk back, and we turned out fine. Actually, we turned out more than fine—we learned to create our own stories, solve our own problems, and build our own worlds. Those silent teddy bears and wordless blocks did exactly what they were supposed to do: they let us do the work. Our kids deserve the same chance. Skip the smart toys and reach for the classics that actually support healthy development—the ones that require imagination, not an internet connection.

If you already have an AI toy

If an AI toy is already in your home—whether you bought it or received it as a gift—you’re not alone, and you haven’t failed your kid. Nearly half of parents have purchased or considered purchasing these toys, according to Common Sense Media. The marketing is compelling, and the promises of educational benefits are everywhere.

But now that you know the risks, here are some steps you can take:

Limit when and how it’s used. Treat it like screen time rather than a regular toy. Set specific times for interaction and keep sessions short. The goal is to prevent your child from forming the kind of emotional attachment these toys are designed to create.

Never leave them alone with it. Stay in the room when your child is using the toy. Listen to what it says. If it suggests anything inappropriate or makes your child uncomfortable about ending the interaction, that’s your cue to intervene.

Disconnect it when not in use. Sambandam’s advice applies here: remove the batteries or disconnect it from WiFi between play sessions. “You simply have to assume it is recording,” he says. The easiest way to ensure it’s not is to cut the power.

Check the app and settings. If the toy has a companion app, explore the parental controls and privacy settings. Turn off data sharing where possible. Some apps let you review conversation logs—use that feature.

Balance it with traditional play. Make sure your child has plenty of access to open-ended toys that don’t talk back. Blocks, art supplies, dolls, books—these should be the main event, not the AI toy.

It’s okay to get rid of it. If you’re not comfortable managing the risks, you can simply remove it from your home. You don’t owe the toy anything, and your child will move on faster than you think. Replace it with something classic, and frame it as a simple swap.

The most important thing is that you’re informed now, and you can make the choice that feels right for your family.