
Somewhere on a leafy British road the above sign sits in quiet
absurdity. Drivers read it, ponder its existence, and accelerate
away none the wiser. It’s a sign about a sign that isn’t in
use.

But all is not as strange as it first seems. Typically, the sign
is a placeholder which indicates that the variable message sign on
a nearby gantry isn’t working yet or is being tested.

AI regulation in the UK finds itself in similar territory: there
are signs that things are afoot, but concrete signposting seems
some way off. The Financial Times recently reported that an
AI Bill isn’t looking likely in the next King’s Speech,
which is due to take place in mid-May. 

Put simply, the UK is still in testing mode on AI regulation,
trying to work out where it wants to go and how it wants to get
there.

This is why the Digital Regulation Cooperation Forum (DRCF) has assumed such importance in shaping Britain’s current approach to AI. On 10 March 2026, the forum convened its second Responsible AI Forum, at which those involved in AI could better understand the who, what, when, where, why and how of UK AI rules in 2026.

What follows is our second article on the practical lessons for
senior leadership teams and general counsel who need to understand
(and thrive in) the shifting world of AI regulation.

What is the DRCF?

The DRCF launched in 2020 as a voluntary forum to ensure a greater level of cooperation between regulators, given the unique challenges posed by the regulation of online platforms. It brings together four UK regulators with responsibilities for digital oversight: the Competition and Markets Authority; the Financial Conduct Authority; the Information Commissioner’s Office; and Ofcom.

Agentic AI

With the recent publication of the ICO’s Tech Futures report on Agentic AI, we have some insight into the regulatory thinking around this emerging technology. While the ICO’s work focusses on the data protection implications of deploying AI, it was interesting to hear from Ofcom, UKAI and Responsible Intelligence about trends and developments in agentic AI.

Accountability in the agentic AI supply chain is one issue senior leaders will be pondering before embarking on any projects. It is
clear there needs to be discussion around where boundaries lie
within the agentic ecosystem and the web of relationships this
emerging tech inevitably brings. If something were to go wrong
– and there are already many examples to choose from –
how will liability be assessed and determined and how will this
reflect on your brand’s reputation as well as the bottom
line?

Place agentic AI in the consumer context and you significantly up the ante. Transparency and explainability will be key to compliance, but also to ensuring consumers understand exactly what they are signing up to and how they will be protected in line with existing laws and guidance. We were directed to several publications on agentic AI recently released by the CMA which provide insight into the regulator’s thinking.

While agentic AI presents opportunities, senior leaders need to step back and take time to make informed decisions. It is important to experiment, but also to build in pilot phases, extensive testing and feedback loops before launching such a product.

It is imperative to protect consumers from harm, whether that harm is caused directly or indirectly, and to avoid potential exploitation, particularly of vulnerable consumers who may not understand the possible outcomes of using agentic AI. It is equally important to ensure competition across the whole AI stack, so that consumers can give clear consent to transparent decision-making that has their best interests at heart.

If businesses don’t get this right, they will likely find themselves falling foul of existing laws and facing accusations of manipulation, price discrimination and a lack of competition in the goods and services offered.

Consumer trust is paramount, and at the moment trust in AI, let alone agentic AI, is a huge issue. Many people are aware of fraud and misinformation, but when it comes to agentic AI, unless there is transparency and explainability, as well as the right to challenge decisions, it seems the uptake will not match the hype. That said, responsible development with safeguards in place, allowing for genuine, freely given, informed consent, might see a very different landscape in 12 months’ time.

Children’s wellbeing 

Against the backdrop of the government’s consultation on UK children’s digital wellbeing, covering social media age bans, curfews, AI chatbots and gaming (for more, see our article here), the DRCF hosted a session on growing up with chatbots.

The expert panel shared some fascinating (or terrifying, depending on your point of view) statistics: one third of UK teenagers use a chatbot for an emotional relationship; 56% believe AI can think; 23% believe AI can feel emotions; and 40% have no concerns about taking advice from an AI chatbot.

Children exhibit a high level of trust in chatbots, often
blurring the boundary between what is a chatbot and what is a
friend. While LLMs are improving all the time, they don’t have
empathy – they are merely learning how to respond to
emotional questions, and we know they don’t always get the
response right.

This is why the ICO’s Age Appropriate Design Code is so important; for those exploring AI tools that are in scope, it is essential to invest time and resources in getting this right. Media and regulatory scrutiny in this area is at an all-time high globally, and no-one wants to be making headlines for the wrong reasons.

Well-designed chatbots that don’t process children’s special category data, have safety-by-design features enabled and are segmented to suit children’s cognitive development do have a positive role to play in children’s lives.

The discussion concluded that children and parents need to
upskill to understand the tech the children are being exposed to
and have informed conversations around its use. Companies operating
in this space need to carefully consider their legal, regulatory
and ethical obligations and ensure they always put the best
interests of the child first. This raises nuanced questions about revenue models, creating dependency, doom scrolling and the like, but that, as they say, is for another day!

UK Government AI roadmap

To round off the day, Mary Jones, Director of AI Strategy and Preparedness at DSIT, provided an update on the UK government’s AI Strategy. It may come as a surprise that 75% of the AI Opportunities Action Plan is complete one year on, with 38 out of the 50 recommendations met.

The UK government is clear this is the foundation on which it
can now build by “looking up and looking out” to
get AI working across the economy and ensure responsible but faster
adoption. It is hoped that the five established AI Growth Zones
will be key to unlocking private investment, driving job creation
and building the required data centre capacity. An additional
£500 million worth of funding to back UK AI companies is also
in place. 

While work is underway to upskill workers, it is clear the target of 10 million by 2030 is an ambitious one. It is hoped the appointment of sectoral industry champions will provide impetus across various industries and make real progress towards this goal.

A new Future of Work unit has also been established to ensure
responsible AI adoption, while monitoring disruption and the impact
of AI on the labour market. It is also tasked with ensuring AI
boosts jobs and growth while helping workers to upskill and adapt.
A wide remit and one to watch with interest!

Responsible AI: a quick reminder

The UK hasn’t created a single AI regulator. Instead,
it’s asking existing bodies (the FCA, Ofcom, the ICO and
others) to police AI within their own patches. 

Five principles currently sit at the heart of this approach:


safety, security and robustness;

transparency and explainability;

fairness;

accountability and governance; and

contestability and redress.

Each regulator interprets the principles for its own remit.

So what?

While the DRCF highlights the importance of working collaboratively with other digital regulators and industry, there is still an air of waiting, wondering and the unknown. The sign isn’t absurd; it’s there as a placeholder, which is very much what the current state of play reflects when it comes to AI in the UK.

Everyone is navigating this new world together, and while there are competitive pressures to realise the benefits of AI, it is essential to heed the placeholder and know that you have your strategy agreed, governance in place and appropriate use cases for whichever AI tools you are using.

