The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.
This was up from 199 over the same period in 2024, it said.
Its chief executive, Kerry Smith, welcomed the government’s proposals, saying they would build on the charity’s long-standing efforts to combat online CSAM.
“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” she said.
“Today’s announcement could be a vital step to make sure AI products are safe before they are released.”
Rani Govender, policy manager for child safety online at the NSPCC, welcomed the measures for bringing greater accountability and scrutiny to how firms’ models affect child safety.
“But to make a real difference for children, this cannot be optional,” she said.
“Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.”