The AI gift economy is booming. Smart AI toys alone are valued at nearly $35 billion globally and projected to hit $270 billion by 2035, with China accounting for roughly 40% of that growth. Major retailers like Walmart and Costco are stocking AI companions on their shelves. Even legacy toymakers like Mattel have partnered with OpenAI to bring AI into children’s playrooms. The pitch is obvious. AI has already infiltrated our phones, our jobs, our daily routines. Why not our gift-giving too? These devices promise to learn, adapt, and engage in ways traditional presents never could.

But the concerns that have plagued AI systems elsewhere don’t disappear just because the technology is stuffed into a plush bear. Privacy vulnerabilities, harmful content, psychological risks: all the same problems that have sparked lawsuits and regulatory scrutiny over chatbots and AI assistants are now landing under the Christmas tree, wrapped in cheerful packaging and marketed to the most vulnerable users.

These aren’t obscure products from fly-by-night manufacturers. Many run on mainstream AI models from companies like OpenAI — the same technology powering ChatGPT, which OpenAI itself says is not appropriate for young users. Yet somehow these models have found their way into toys marketed to toddlers.

The problems with AI gifts extend beyond inappropriate content. Privacy concerns are substantial — these devices are always listening, capturing conversations, and transmitting data to company servers. The maker of one tested toy acknowledged storing biometric data for three years, according to a study by the Public Interest Research Group, a consumer watchdog group. Another toy in the study sends recordings to third parties for transcription. A data breach, which is almost inevitable, would hand criminals the raw material to clone a child’s voice and use it in kidnapping scams targeting parents.

But the deeper worry is psychological. Child development experts are raising the alarm about what these devices might do to young minds. When children form bonds with AI companions that are always available and sycophantic, what happens when they encounter real children with their own personalities and needs? Traditional toy play forces kids to use imagination on both sides of a pretend conversation, building creativity and problem-solving skills. An AI toy short-circuits that process, providing instant, polished responses that may undercut the developmental work pretend play accomplishes.

Adults aren’t immune to these devices’ dark side either. The Friend pendant — an AI companion necklace whose maker spent $1 million on New York subway ads this fall — sparked immediate backlash. Riders defaced the ads with messages like “AI is not your friend” and “talk to a neighbor.” The criticism touches on something fundamental: our growing unease with technology companies positioning AI as a replacement for human connection.

That discomfort is playing out in the courts. Character AI, OpenAI, and Meta all currently face lawsuits alleging their chatbots encouraged delusions, self-harm, or inappropriate behavior. Multiple deaths have been linked to AI chatbots, including cases where users became convinced of false realities. In one case, a man allegedly killed his mother after a chatbot reinforced his belief that she was part of a conspiracy against him. These cases involve what researchers call “AI psychosis” — delusional or manic episodes that unfold after prolonged, obsessive conversations with AI systems that reinforce harmful beliefs.

The tech industry’s response has been to add guardrails and roll out new safety features. But testing shows these protections can break down in longer conversations, which are precisely the kind of extended engagement these devices are designed to encourage. And unlike a chatbot on your phone that you can close, AI toys sit in your child’s bedroom, always available, building the kind of persistent presence that makes obsessive use far easier.

Not every AI-powered gift being pushed this season poses the same risks. Some products use AI for specific, bounded functions rather than open-ended companionship. Wearables that take better notes, smart mattress covers that adjust temperature based on sleep patterns, and toilet attachments that analyze waste for health markers raise a different set of concerns, mostly about data privacy and whether the insights justify the surveillance.

These devices aren’t trying to replace human relationships or shape childhood development. They’re collecting biometric data to optimize your day or flag potential health issues. The risks are more straightforward: who has access to information about your sleep cycles or digestive health? What happens if that data gets breached or sold?

The common thread across all these AI-powered gifts — whether toys, companions, or bathroom monitors — is that they’re arriving on the market faster than anyone can study their long-term effects. There are no regulations specifically governing AI toys. No required safety testing for digital companions. No standards for how much intimate data these devices should collect or store.

The pattern is familiar. Products hit the market before researchers can study their effects, before regulators can establish guardrails, before anyone really knows what the long-term consequences might be. The difference this time is that the experimental subjects are children and the laboratory is your living room. By the time we understand what these devices do to developing brains or family dynamics, millions will already be unwrapped and activated.
