Major Insurers Say AI Is Too Risky to Cover
Insurers on both sides of the Atlantic are warning that artificial intelligence may now be too unpredictable to insure, raising concerns about the financial fallout if widely used models fail at scale.
Anxiety
As recently reported in the Financial Times, anxiety across the insurance sector has grown sharply in recent months as companies race to deploy generative AI tools in customer service, product design, business operations, and cybersecurity. Several of the largest US insurers, including Great American, Chubb, and W. R. Berkley, have now reportedly asked state regulators for permission to exclude AI-related liabilities from standard corporate insurance policies. Their requests centre on a growing fear that large language models and other generative systems pose what the sector calls “systemic risk”, where one failure triggers thousands of claims at the same time.
What Insurers Are Worried About
The recent filings describe AI systems as too opaque for actuaries to model, with one insurer, quoted by the Financial Times, describing LLM outputs as “too much of a black box”. Actuaries normally rely on long historical datasets to predict how often a specific type of claim might occur. Generative AI has only been in mainstream use for a very short period, and its behaviour is influenced by training data and internal processes that are not easily accessible to external analysts.
The Central Fear
The industry’s central fear is not an isolated error but the possibility that a single malfunction in a widely used model could affect thousands of businesses at the same time. A senior executive at Aon, one of the world’s largest insurance brokers, outlined the challenge earlier this year, noting that insurers can absorb a £300 to £400 million loss affecting one company but cannot easily survive thousands of claims emerging simultaneously from a common cause.
The concept of “aggregation” risk is well understood within insurance. Cyberattacks, natural disasters, and supply chain failures already create challenges when losses cluster. What makes AI different is the speed at which a flawed model update, inaccurate output, or unexpected behaviour could spread across global users within seconds.
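To illustrate why this kind of clustering worries actuaries, here is a minimal sketch in Python. The portfolio size, claim probability, claim cost, and shared-model assumptions are entirely hypothetical, chosen only to show how correlated failures change the worst-case year while barely moving the average.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: 10,000 insured businesses, each with a
# 1-in-1,000 annual chance of a £500,000 AI-related claim.
CLIENTS = 10_000
CLAIM_PROB = 0.001
CLAIM_COST = 500_000  # £ per claim
YEARS = 10_000         # number of simulated policy years

# Independent risk: each client's claim is an unrelated event,
# so annual claim counts follow a binomial distribution.
indep_losses = rng.binomial(CLIENTS, CLAIM_PROB, size=YEARS) * CLAIM_COST

# Aggregated (correlated) risk: the same background claims, plus a
# 1% annual chance that a single shared-model fault triggers claims
# from 60% of clients at once.
shared_fault = rng.random(YEARS) < 0.01
corr_losses = indep_losses + shared_fault * int(CLIENTS * 0.6) * CLAIM_COST

print(f"Independent: mean £{indep_losses.mean():,.0f}, worst year £{indep_losses.max():,.0f}")
print(f"Correlated:  mean £{corr_losses.mean():,.0f}, worst year £{corr_losses.max():,.0f}")
```

Both portfolios look similar in an average year, but the correlated one occasionally produces a multi-billion-pound loss when the shared model fails, which is the aggregation scenario the proposed exclusions are designed to avoid.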
Real Incidents Behind the Rising Concern
Several high-profile cases have highlighted the unpredictability of AI systems when deployed at scale. Earlier this year, Google’s AI Overview feature falsely accused a US solar company of regulatory violations and legal trouble. The business filed a lawsuit seeking $110 million in damages, arguing that the false claim caused reputational harm and lost sales. The case was widely reported across technology and legal publications and is now a reference point for insurers trying to price the risks associated with AI-driven public information tools.
Air Canada faced a different challenge when its customer service chatbot invented a bereavement discount policy and provided it to a traveller. The airline argued that the chatbot, not the company, was responsible for the mistake, but a Canadian tribunal ruled in early 2024 that companies remain liable for the behaviour of their AI systems. This ruling has since appeared in several legal and insurance industry analyses as a sign of where liability is likely to sit in future disputes.
Another incident involved the global engineering consultancy Arup, which confirmed that fraudsters used a deepfake of a senior executive during a video call to trick a staff member into authorising transfers. The theft totalled around £20 million (HK$200 million). This widely reported case has been used by cyber risk specialists to illustrate the speed and sophistication of AI-enabled financial crime.
These examples are not isolated. Industry reports from cyber insurers and security analysts show steep increases in AI-assisted phishing attacks, automated hacking tools, and malicious code generation. The UK’s National Cyber Security Centre has also noted that AI is lowering the barrier for less skilled criminals to produce convincing scams.
Why Insurers Are Seeking New Exclusions
Filings submitted to US state regulators show insurers requesting permission to exclude claims arising from “any actual or alleged use” of AI in a product or service. In fact, some requests are reported to go further, seeking to exclude losses connected to decisions made by AI or errors introduced by systems that incorporate generative models.
W. R. Berkley’s filing, for example, asks to exclude claims linked to AI systems embedded within company products, as well as advice or information generated by an AI tool. Chubb and Great American are seeking similar adjustments, citing the difficulty of identifying, modelling, and pricing the underlying risk.
AIG was mentioned in early coverage of these filings, although the company has since clarified that it is not seeking to introduce any AI-related exclusions at this time.
Some specialist insurers have already limited the types of AI risks they are willing to take on. Mosaic Insurance, which focuses on cyber risk, has confirmed that it provides cover for certain software where AI is embedded but does not offer protection for losses linked to large general purpose models such as ChatGPT or Claude.
What Industry Analysts Say About the Risk
The Geneva Association, the global insurance think tank, published a report last year warning that parts of AI risk may become “uninsurable” without improvements in transparency, auditability, and regulatory control. The report highlighted several drivers of concern, including the lack of training data visibility, unpredictable model behaviour, and the rapid adoption of AI across industries with varying levels of oversight.
Lloyd’s of London has also taken an increasingly cautious approach. Recent bulletins have instructed underwriters to review AI exposure within cyber policies, noting that widespread model adoption may create new forms of correlated risk. Lloyd’s has been preparing for similar challenges on the cyber side for years, including the possibility that a global cloud platform outage or a major vulnerability could create simultaneous losses for thousands of clients.
In its most recent market commentary, Lloyd’s emphasised that AI introduces both upside and downside risk but noted that “high levels of dependency on a small number of models or providers” could increase the severity of a large scale incident.
Regulators and the Emerging Policy Debate
State insurance regulators in the US are now reviewing the proposed exclusions, which must be approved before they can be applied to policies. Approval is not guaranteed, however, and regulators typically weigh the interests of insurers against the needs of businesses that require predictable cover to operate safely.
There is also a growing policy debate in Washington and across Europe about whether AI liability should sit with developers, deployers, or both. For example, the European Union’s AI Act, approved earlier this year, introduces new rules for high risk AI systems and could reduce some uncertainty for insurers in the longer term. The Act requires risk assessments, transparency commitments, and technical documentation for certain types of AI models, which could help underwriters understand how systems have been trained and tested.
The UK has taken a more flexible, sector based approach so far, although its regulators have expressed concerns about the speed at which AI is being adopted. The Financial Conduct Authority has already issued guidance reminding firms that they remain responsible for the outcomes of any automated decision making systems, regardless of whether those systems use AI.
Business Risk
Many organisations now use AI for customer service, marketing, content generation, fraud detection, HR screening, and operational automation. However, if insurers continue to retreat from covering AI related losses, businesses may need to rethink how they assess and manage the risks associated with these tools.
Some analysts believe that a new class of specialist AI insurance products will emerge, similar to how cyber insurance developed over the past decade. Others argue that meaningful coverage may not be possible until the industry gains far more visibility into how models work, how they are trained, and how they behave in unexpected situations.
What Does This Mean For Your Business?
Insurers are clearly confronting a technology that’s developing faster than the tools used to measure its risk. The issue is not hostility towards AI but the absence of reliable ways to model how large, general purpose systems behave. Without that visibility, insurers cannot judge how often errors might occur or how widely they might spread, which is essential for any form of cover.
Systemic exposure remains the central concern. A single flawed update or misinterpreted instruction could create thousands of identical losses at once, something the insurance market is not designed to absorb. Individual claims can be managed, but large clusters of identical failures cannot. This is why insurers are pulling back, and why businesses may soon face gaps in cover that did not exist a year ago.
The implications for UK organisations are significant. Many businesses already rely on generative AI for customer service, content creation, coding, and screening tasks. If insurers exclude losses linked to AI behaviour, companies may need to reassess how they deploy these systems and where responsibility sits if something goes wrong. A misstatement from a chatbot or an error introduced during a design process could leave a firm exposed without the safety net of traditional liability cover.
Developers and regulators will heavily influence what happens next. Insurers have been clear that better transparency, audit trails, and documentation would help them price risk more accurately. Regulatory frameworks, such as the EU’s AI Act, may also make high risk systems more insurable over time. The UK’s lighter, sector based approach leaves more responsibility with businesses to manage these risks proactively.
The wider picture here is that insurers, developers, regulators, and users each have a stake in how this evolves. Until risk can be measured with greater confidence, cover will remain uncertain and may become more restrictive. The next stage of AI adoption will rely as much on the ability to understand and manage these liabilities as on the technology itself.