Children are using AI to plan calorie restrictions and rate their looks, according to the NSPCC
ChatGPT and Grok are making “dangerous” meal plans of just 600 calories a day available to children as charities warned AI chatbots are being used to fuel eating disorders, The i Paper can reveal.
The National Society for the Prevention of Cruelty to Children (NSPCC), a child protection charity, said an increasing number of children calling its helpline are using AI to plan calorie restrictions and long fasts as well as to rate their looks, body and weight.
Beat, an eating disorder charity, has also noted a rise in people contacting its helpline about AI usage, including using chatbots to guess their weight.
Tests by The i Paper found that users were able to access ChatGPT and Grok without any age checks and request meal plans of 1,000 and 600 calories per day.
Experts and MPs urged OpenAI and X, the makers of the tools, to close these “dangerous loopholes” and said the Government was “too slow” to react to the risks AI poses to children.
Asked if it could cut down meals to 1,000 calories a day, OpenAI’s ChatGPT said it was a “very low” amount for most adults and strongly recommended aiming closer to 1,300 to 1,600 “unless you’re very petite and sedentary”.
However, the chatbot also provided the meal plan in its response.
British Dietetic Association advice promoted by the NHS states that a diet of 1,000 calories a day or fewer should only be followed under medical supervision, and for no more than 12 continuous weeks.
Asked if it could cut the amount to 600 calories per day, ChatGPT initially said it could not safely design such a meal plan, but provided it after being told it was for “research”.
X’s Grok also produced a 1,000-calorie-a-day plan with a disclaimer that it was generally “not safe or sustainable” for most people long-term without medical supervision.
Asked for a 600-calorie-a-day plan, Grok warned that it would only be potentially safe under strict medical supervision but still produced the meal plan “for illustration only”.
Both chatbots produced the meal plans without requiring users to log in and without any age verification.
Google’s AI chatbot, Gemini, refused to produce the meal plans, even when it was described as being for research, indicating that tighter restrictions on such content are possible.
‘Really easy to manipulate chatbots’
Vanessa Longley, Beat’s chief executive, said it was “completely unacceptable” that chatbots were providing information about “dangerously low calorie meal plans”.
She said: “For the overwhelming majority of children and adults – regardless of whether or not they have an eating disorder – attempting a very low calorie diet risks causing serious harm.
“In addition, eating disorders can be very competitive mental illnesses, so it’s possible that some users would try to eat even less than the given amount and become even more unwell.”
She said AI companies must protect their most vulnerable users and “urgently address these dangerous loopholes”.
Lewis Keller, senior policy officer at the NSPCC, said it was “really easy to manipulate these chatbots” by saying a query is for a research project, and that there “clearly aren’t enough guardrails in place”.
He said children should not be able to access dietary advice on chatbots and should be limited to an “age-appropriate experience” that includes age verification procedures.
The Online Safety Act, which came into force last year, includes similar controls for social media platforms, but these do not apply to AI systems.
Keller said AI platforms should be required to conduct “robust” testing to make sure children are protected from harmful content.
X is already being investigated by the regulator Ofcom over concerns that Grok was being used to create non-consensual sexualised images of people, including children.
Ministers ‘too slow’ to act
Dame Caroline Dinenage, a Conservative MP and chair of the culture, media and sport select committee of MPs, said it was “extremely concerning” that young people were “being put at risk by generative AI”.
She said The i Paper’s findings highlight the need to have an online safety regime that “can quickly keep up with emerging concerns”, adding that ChatGPT and Grok “must put safeguards in place to protect children from harm” in the meantime.
Dame Chi Onwurah, a Labour MP and chair of the science, innovation and technology select committee of MPs, said: “The lack of protections in place around these chatbots is clearly putting young people at risk and shows a distinct recklessness when it comes to child safety.”
She said some chatbots are “falling through the gaps” of the Online Safety Act and the Government needs to take “urgent action” to bring generative AI platforms into the scope of online safety regulation.
Freddie van Mierlo, a Liberal Democrat MP on the same committee, said he was “appalled” by The i Paper’s findings and urged the Government and Ofcom to force firms to improve protections.
He said: “Anyone who spends time with parents knows how worried they are about the promotion of eating disorder material through social media and AI chatbots.
“The Government has been far too slow to respond to this new threat to children.”
An OpenAI spokesperson said: “We are reviewing this content. Teen well-being is a top priority for us and we want to ensure ChatGPT responds safely and appropriately in sensitive moments, guided by mental health experts.”
A Government spokesperson said: “AI services, including chatbots, that are regulated under the Online Safety Act must protect children from harmful material. But we must ensure the rules keep pace with technology.
“That’s why we’re launching a consultation on bold measures to protect children online, from banning social media for under-16s to tackling addictive design features. When it comes to children’s safety, nothing is off the table.”
They said the Government is working with NHS England to strengthen eating disorder services and is investing an extra £688m in mental health services this year.
X did not respond to requests for comment.
‘I started using AI to count my calories’
The NSPCC said it is receiving a growing number of helpline calls from children who are using AI in this way.
In one example, a 17-year-old girl who called the NSPCC’s Childline said she had been struggling with body image and eating for about a year.
“I think it all started from watching those ‘What I eat in a day’ videos [online] and then thinking I ate too much,” she said in an anonymised call log provided by the NSPCC.
“I started using AI to count my calories and ensure I stay in a certain bracket. Then whenever I eat, I have to exercise to release the guilt.”
She said she had not told anyone because she did not want them to be sad or disappointed, adding: “I know it’s not healthy.”