Businesses can no longer rely on simply deploying AI responsibly; they must also prove their people understand how to use it
Nobody reading Computing will need persuading of the growth of AI. Figures show that 78% of global companies use AI, with 71% deploying GenAI in at least one function. This spread of AI, combined with a widespread lack of understanding of basic tech principles, is at the heart of regulatory change, with legislators now turning their attention to AI literacy.
As usage rises, so do regulatory expectations. A new and little-discussed provision of the EU AI Act – Article 4 on AI literacy – places a clear obligation on organisations to ensure that all their people (including contractors and suppliers) understand the tools they’re using.
This provision, which came into effect on 2nd February 2025, creates new compliance risks and operational challenges, especially for UK businesses trading in the EU. Whilst regulatory enforcement only starts in August 2026, we’re already seeing private litigation threatened as a means of enforcing literacy obligations.
AI knowledge isn’t just for IT teams
Article 4 states that users and those affected by AI systems must have “sufficient AI literacy to make informed decisions.” That doesn’t just mean IT teams, developers or data scientists; it extends to HR staff using AI in hiring, marketing teams using GenAI, and even third-party contractors.
It’s easy for some organisations to assume the AI literacy provisions don’t apply to them simply because they’re not in the tech industry. However, the Act covers deployers of AI systems as well as providers, which could catch many organisations that don’t think they are dealing with AI at all.
The European Commission published additional guidance this spring, defining AI literacy as the “skills, knowledge and understanding” needed to use or interact with AI systems responsibly. This includes:
Understanding how AI systems work and the data they use;
Recognising risks such as hallucination, discrimination or bias;
Knowing when and how to apply human oversight;
Being aware of legal obligations under the EU AI Act and other relevant frameworks.
The time has come for businesses to fully get to grips with AI and to train staff to prevent misuse.
Who do the rules apply to?
The scope of Article 4 is broad. It applies to any organisation using AI in the EU, even if they’re based elsewhere. That includes UK businesses deploying AI tools within EU operations or offering AI-enabled services into EU markets.
Crucially, non-compliance doesn’t just affect the tech team. If a customer service chatbot misleads users or a hiring algorithm perpetuates bias, the business could be held liable.
As regulators sharpen their focus, the risks associated with shadow AI are also rising. Outright AI bans simply don’t work; at best they push AI use onto personal devices, where the risk of harm may be greater. According to a McKinsey study, 90% of employees use AI and 21% are heavy users. That’s why staff training and clear policies are essential.
There’s a generational risk too. Digital natives are more likely to find the tools they need to do the job via social media or search. Without proper guidance, this can open organisations up to risk. Including everyone in a well-thought-out AI literacy programme can reduce misuse and strengthen compliance.
Consequences of non-compliance
The AI literacy obligation came into effect on 2nd February 2025, but enforcement by national authorities across the EU begins from 3rd August 2026. EU Member States will determine their enforcement strategy and level of penalty. The European Commission highlighted that enforcement must be proportionate and based on the individual case. Factors such as gravity, intention, and negligence should be considered.
The European AI Office exists to provide expertise, foster innovation, and coordinate regulatory approaches – but it does not directly enforce Article 4.
Whilst the new EU regulatory regime takes shape, the primary risk for organisations not meeting AI literacy requirements is civil action, with various pressure groups actively monitoring AI use. Complaints could also be made to GDPR regulators if the use of personal data is not lawful, fair and reasonable. We’ve already seen complaints of this kind made against several social media companies and against a UK business behind a popular online dating app, which used AI for ‘icebreakers’ in initial introductions.
Practical steps to prepare
All of this shows businesses cannot afford to wait until 2026 to act. National regulators are already developing plans for audits and enforcement. Practical preparation means tackling both governance and culture.
Here are five steps for legal and compliance teams to consider:
Map your AI estate
Conduct a comprehensive audit of the AI systems used across your business. Include tools used for decision-making, customer interaction or content generation, whether built in-house or provided by a third party.
Develop and deliver targeted AI literacy training
Training shouldn’t be generic. It must be tailored to users’ roles and risk exposure. For example, HR teams using AI in hiring must understand issues around bias, data protection, and explainability.
Review contracts and third-party relationships
If vendors or service providers are using AI systems on your behalf, they may need to meet AI literacy standards too. Ensure these obligations are reflected in contracts.
Create internal policies on AI use and governance
Establish clear policies on acceptable AI use, approval processes, and human review. Treat this with the same rigour as data protection or anti-bribery frameworks.
Engage the board and embed a culture of responsible AI use
AI is now a board-level issue. Leadership must set expectations around responsible innovation, transparency, and compliance.
The bigger picture
The introduction of Article 4 marks a clear regulatory shift. Businesses can no longer rely on simply deploying AI responsibly; they must also prove their people understand how to use it. Just as GDPR transformed how organisations handle data, the EU AI Act is reshaping how AI is implemented, monitored, and explained across the workforce. What was once good practice is now a legal obligation.
Jonathan Armstrong is a Partner at Punter Southall Law