Reuters

AI definitions a challenge for liability insurers' exclusions


By Henry Gale

(The Insurer) - As commercial liability insurers variously consider excluding artificial intelligence or generative AI risks, The Insurer considers how these terms are being defined and what they mean for policyholders.

The definitions insurers are using in early AI exclusions reveal what technologies they believe could be increasing their exposures.

The Insurer’s analysis shows there is no consistent approach across insurers, and some definitions are not even internally consistent, suggesting carriers are not yet sure exactly what technology they are worried about.

WHAT COUNTS AS AI

Common definitions of AI refer to machines performing tasks that are typically associated with human levels of intelligence. This is a moving target: as people become accustomed to technology doing cleverer things, our view of what tasks require human-level intelligence changes.

Legal and contract definitions of AI tend to take a different approach. The OECD, a group of 38 countries, reached agreement in 2023 on a definition for an “AI system” to be used in legislation and regulation.

WR Berkley, which has filed one of the broadest AI exclusions with U.S. state regulators, uses a similar definition (though it defines “artificial intelligence”, not an “AI system”) in its exclusion. Differences from the OECD definition are highlighted in bold.

In the context of businesses’ operations and their risks, this is a relatively wide definition. A statistical model might even fulfil the criterion of being a “machine-based system that … infers, from the input it receives, how to generate output”.

It extends far beyond the chatbots, image generators and other recent technological advances that have been the focus of corporate hype around AI over the past three years.

The main change from the OECD definition is to stress that AI includes systems that can “generate derived synthetic content” or, in other words, the GenAI tools such as ChatGPT that can create text, images and other content. The wording is similar to a definition of GenAI used by the White House in a 2023 executive order on AI.

GENERATIVE OR NOT GENERATIVE?

An exclusion filed by Cincinnati also combines elements of the OECD AI definition with the White House GenAI definition. Differences from Berkley’s version are highlighted in bold.

Berkley’s definition makes clear that technologies meeting the White House’s definition of GenAI are included “without limitation”. In other words, AI does not have to be generative to be covered by this definition.

However, Cincinnati Financial’s definition does not say that. By tagging “in order to generate derived synthetic content” on to the end of the OECD definition, it appears to narrow its scope to GenAI alone.

This raises several questions – why start from the OECD’s definition designed to incorporate all forms of AI? Why add a phrase at the start that uses broad language about “any technology that allows machines to mimic human thinking or learning”? And why define the term “artificial intelligence” rather than “generative AI”?

Cincinnati told The Insurer it did not have information to share beyond what was included in the filings, while Berkley did not immediately respond to a request for comment.

Other definitions also appear muddled. An exclusion filed by Frederick Mutual, though it clarifies that generative tools are not the only AI it applies to, uses a circular definition. It defines AI as “anything that uses artificial intelligence”.

In a subsequently withdrawn filing, Philadelphia Insurance bafflingly defines GenAI not as a technology, system or software, but as the content generated by an AI tool.

This appears even more bizarre when inserted into the context of the relevant policy wording.

Despite being a definition of GenAI, it is also unclear whether content created by non-generative AI tools could fall under its scope.

Frederick Mutual and Philadelphia did not immediately respond to requests for comment on these wordings.

WHAT THIS MEANS IN PRACTICE

The variations and contortions of these definitions reflect early attempts to grapple with how policyholders’ use of certain technologies could be affecting liability risks. Targeting the right technologies is proving challenging.

As The Insurer’s analysis set out on Monday, those who implement broad exclusions should prepare for a backlash from brokers and clients.

Carriers may gravitate towards the category of GenAI to respond to risks arising from the advances from the last three years – such as copyright infringement, defamation and hallucinations in generated content – without sweeping in earlier, more established technologies.

But GenAI-targeted exclusions would not necessarily encompass other high-profile AI risks, such as discrimination caused by biased algorithms used in hiring or lending software.

Verisk’s Core Lines Services, which provides standardised policy language for insurers, introduced optional exclusions specifically targeting GenAI risks for commercial liability policies in a multistate filing made in July.

The Insurer asked Joseph Lam, vice president of general liability, core lines at Verisk, why the new exclusions applied only to GenAI.

“Artificial intelligence has been in use for decades for various operations,” Lam said. “As more companies begin to incorporate more AI into their operations, products and services, the potential liability risk is expected to continue to expand.”

He continued: “As AI technology continues to evolve, we heard from several of our customers that generative AI tends to present unique risks, and this led us to decide to develop these exclusions.”

Rather than rely on the White House’s or another existing regulatory formulation, Verisk’s exclusion uses what appears to be an original definition of GenAI. The Insurer could not identify a definition that closely resembled it elsewhere, but Verisk’s filing said it was “generally based on existing definitions provided by government sources and NAIC definitions”.

It’s still early days. Although these exclusions have been submitted to regulators, they have not necessarily made it into many, if any, policies. But with many state regulators having approved Verisk’s exclusions, due to come into force in January 2026, excluding certain GenAI risks will become an option for many carriers from next year.

On Wednesday, The Insurer will consider in more detail what types of losses the exclusions are being applied to, as we continue our week of in-depth coverage of the industry’s early AI exclusions. You can catch up on Monday’s article here.
