The use of chatbots is growing incredibly fast. Data from the advice and research group Internet Matters says the number of children using ChatGPT in the UK has nearly doubled since 2023, and that two-thirds of 9- to 17-year-olds have used AI chatbots. The most popular are ChatGPT, Google’s Gemini and Snapchat’s My AI.
For many, they can be a bit of fun. But there is increasing evidence the risks are all too real.
So what is the answer to these concerns?
Remember, the government did, after many years of argument, pass a wide-ranging law to protect the public – particularly children – from harmful and illegal online content.
The Online Safety Act became law in 2023, but its rules are being brought into force gradually. For many, the problem is that it is already being outpaced by new products and platforms – so it’s unclear whether it really covers all chatbots, or all of their risks.
“The law is clear but doesn’t match the market,” Lorna Woods, a University of Essex internet law professor – whose work contributed to the legal framework – told me.
“The problem is it doesn’t catch all services where users engage with a chatbot one-to-one.”
Ofcom, the regulator whose job it is to make sure platforms are following the rules, believes many chatbots, including Character.ai and the in-app bots of Snapchat and WhatsApp, should be covered by the new laws.
“The Act covers ‘user chatbots’ and AI search chatbots, which must protect all UK users from illegal content and protect children from material that’s harmful to them,” the regulator said. “We’ve set out the measures tech firms can take to safeguard their users, and we’ve shown we’ll take action if evidence suggests companies are failing to comply.”
But until there is a test case, it’s not exactly clear what the rules do and do not cover.