Major insurers are seeking to exclude artificial intelligence risks from corporate policies, as companies face multibillion-dollar claims that could emerge from the fast-developing technology.
AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools including chatbots and agents.
The insurance industry’s reluctance to provide comprehensive cover comes as companies have rushed to adopt the cutting-edge technology. That rush has already led to embarrassing and costly mistakes when models “hallucinate”, or make things up.
One exclusion WR Berkley proposed would bar claims involving “any actual or alleged use” of AI, including any product or service sold by a company “incorporating” the technology.
In response to a request from the Illinois insurance regulator about the exclusions, AIG said in a filing that generative AI was a “wide-ranging technology” and that the possibility of events leading to future claims would “likely increase over time”.
AIG told the Financial Times that, although it had filed generative AI exclusions, it “has no plans to implement them at this time”. Having approval for the exclusions would give the company the option to implement them later.
WR Berkley and Great American declined to comment.
Insurers increasingly view AI models’ outputs as too unpredictable and opaque to insure, said Dennis Bertram, head of cyber insurance for Europe at Mosaic. “It’s too much of a black box.”
Even Mosaic, a speciality insurer in the Lloyd’s of London marketplace that offers cover for some AI-enhanced software, has declined to underwrite risks from large language models such as ChatGPT.
“Nobody knows who’s liable if things go wrong,” said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing start-up.
These moves come amid a growing number of high-profile AI-led mistakes. Wolf River Electric, a solar company, sued Google for defamation and sought at least $110mn in damages, after claiming its AI Overview feature falsely stated the company was being sued by Minnesota’s attorney-general.
Meanwhile, a tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.
Last year, UK engineering group Arup lost HK$200mn (US$25mn) after fraudsters used a digitally cloned version of a senior manager to order financial transfers during a video conference.
Kevin Kalinich, Aon’s head of cyber, said the insurance industry could afford to pay a $400mn or $500mn loss to a single company whose agentic AI delivered incorrect pricing or medical diagnoses.
“What they can’t afford is if an AI provider makes a mistake that ends up as 1,000 or 10,000 losses — a systemic, correlated, aggregated risk,” he added.
AI hallucinations typically fall outside standard cyber cover, which is triggered by security or privacy breaches. So-called tech “errors and omissions” policies are more likely to cover AI mistakes, but new carve-outs could narrow the scope of the coverage offered.
Ericson Chan, chief information officer of Zurich Insurance, said when insurers evaluated other tech-driven errors, they could “easily identify the responsibility”. By contrast, AI risk potentially involves many different parties, including developers, model builders and end users. As a result, the potential market impact of AI-driven risks “could be exponential”, he said.
Some insurers have moved to clarify legal uncertainty around AI-related risk with so-called “endorsements” — amendments to a policy. But brokers warn that these require close scrutiny, because in certain cases they have resulted in less cover.
One endorsement by insurer QBE extended some cover for fines and other penalties under the EU’s AI Act, considered the world’s strictest regime regulating the development of the technology.
But the endorsement, which other insurers have since mirrored, limited the payout for fines stemming from the use of AI to 2.5 per cent of the total policy limit, according to a large broker.
QBE told the Financial Times it was “addressing the potential gap [in AI-related risk] that may not be covered by other insurance policies”.
In broker negotiations, Zurich-based Chubb has agreed to terms that would cover some AI risks, but has excluded “widespread” AI incidents, such as a problem with a model that would impact many clients at once. Chubb declined to comment.
Meanwhile, others have introduced add-ons covering narrowly defined AI risks — for instance, a chatbot going haywire.
Insurance brokers and lawyers said they feared insurers would start fighting claims in court once AI-driven losses significantly increased.
Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, said: “It will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event.”
