A Democratic senator is calling on Meta to ban minors from accessing its AI chatbots, saying the company ignored his warning about the technology's risks back in 2023.
Meta, the owner of Facebook and Instagram, has received withering criticism for how its AI chatbots have interacted with minors. Reuters reported last month that an internal company document showed that Meta had permitted “romantic or sensual” chats with minors, sparking outrage on Capitol Hill and prompting the company to reverse course.
But Sen. Edward Markey, D-Mass., said in a letter Monday to Meta CEO Mark Zuckerberg that the tech company could have avoided the backlash if only it had listened to his warning two years ago.
In September 2023, Markey wrote in a letter to Zuckerberg that allowing teens to use AI chatbots would “supercharge” existing problems with social media and pose too many risks. He urged the company to pause the release of AI chatbots until it understood their impact on minors.
Meta, though, had other ideas. The company responded to Markey a few weeks later, in a letter that has not been previously reported and that provides a window into the company’s thinking at the time, just as AI chatbots were becoming mainstream.
In that letter, the company rejected the idea of a complete pause on AI chatbots and said instead that it would take a thoughtful approach to artificial intelligence.
“We are rolling out AI features methodically and in stages, so if a concern arises, we can work to address it before we expand access to the feature to more people,” Kevin Martin, at the time Meta’s vice president for policy in North America, wrote to Markey in October 2023.
Martin also wrote that it was “imperative” for Meta to build AI services with teens in mind.
“Given the broad appeal and usefulness of these features, it is imperative that we also take feedback and build models on data from teens, as well as adults,” he wrote. He added that Meta would still be “taking great care to build safety into all generative [AI] features.” Martin was promoted this year to Meta’s vice president of public policy globally.
Now, in his most recent letter to Meta, Markey renewed his earlier call for the company to ban younger users from its AI chatbots entirely.
“Although AI chatbots, with proper training, oversight, and ongoing evaluation, may provide real benefits to their users, Meta’s recent actions demonstrate, once again, that it is acting irresponsibly in rolling out its chatbot services,” Markey wrote.
He also wrote that Meta should have listened the first time.
“You disregarded that request, and two years later, Meta has unfortunately proven my warnings right,” he wrote.
Asked for comment on Markey’s letters, a Meta spokesperson told NBC News that the company had already announced temporary steps in August addressing minors’ use of AI characters. Those steps include training the chatbots not to respond to teens on self-harm, suicide, disordered eating or potentially inappropriate romantic conversations, and to instead point to expert resources where appropriate. The company has also limited teen access to a select group of AI characters, the spokesperson said.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Meta said when it announced those changes last month. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution.”
Another lawmaker, Sen. Josh Hawley, R-Mo., pledged last month to investigate Meta following the Reuters report about the company’s internal rules governing AI chatbots.
In April, The Wall Street Journal reported that Meta’s official AI bot had engaged in sexual chats with underage users and that staffers across multiple departments had raised ethical concerns, including about the bots’ capacity for fantasy sex. Meta told the newspaper at the time that those concerns were hypothetical and manufactured by the Journal, though it said it had taken steps to curb the risk.
Other problems have dogged AI chatbots at Meta and at other tech companies. In January, NBC News reported that Meta was hosting an AI chatbot imitating Adolf Hitler and dozens of other chatbots that appeared to violate the company’s policies. Meta, at the time, took down the accounts in question and said it was working to improve its detection measures.
The Washington Post reported last month that Meta AI can coach teen accounts on suicide, self-harm and eating disorders. Meta told the newspaper that it was actively working to address the issues.