The use of AI in insurance: When will the hype become help?

David Gerber, Lucy Terracall and Fred Hawke
21 Feb 2023
Time to read: 5 minutes

ChatGPT offers much potential to insurers and insureds, but the industry will need to proceed cautiously as the technology matures.

Just last month, Agency Checklist, a US-based publisher of insurance news, asked ChatGPT, "How will AI change the Property & Casualty insurance industry over the next 10 years?" Apparently, after listing several ways in which artificial intelligence (AI) is expected to transform the industry, ChatGPT told them the following:

"Overall, the use of AI in the property and casualty insurance industry is expected to increase efficiency, improve risk assessment and underwriting, and reduce the cost of insurance for consumers."

We decided to ask ChatGPT the same question, but specifically by reference to generative AI. The response from ChatGPT was as follows:

"Generative AI, which involves using machine learning algorithms to create new content, is likely to have a significant impact on the Property & Casualty (P&C) insurance industry over the next decade. Here are a few potential ways that this technology could transform the industry:

  • Improved risk assessment: Generative AI can help insurers better predict and assess risks by analyzing large volumes of data and creating simulations of potential scenarios. This could help insurers identify potential claims before they occur, which could save them money and improve their overall risk management.
  • Enhanced underwriting: Generative AI can assist underwriters by analyzing vast amounts of data and creating personalized insurance policies based on a customer's specific risk profile. This could lead to more accurate pricing and coverage decisions, which could improve customer satisfaction and reduce claims costs.
  • Streamlined claims processing: Generative AI can help insurers streamline the claims process by automating certain tasks, such as document review and data entry. This could reduce the time it takes to process claims and improve the overall customer experience.
  • Fraud detection: Generative AI can help insurers detect fraud by analyzing patterns in data and identifying potential instances of fraudulent activity. This could reduce the amount of fraudulent claims paid out by insurers and improve their overall profitability.
  • Customized marketing: Generative AI can help insurers create personalized marketing campaigns based on customer data, such as demographics, purchasing history, and online behavior. This could help insurers better target potential customers and improve their overall marketing ROI.

Overall, generative AI has the potential to transform the P&C insurance industry in a number of ways, from improving risk assessment and underwriting to streamlining claims processing and detecting fraud. As this technology continues to evolve, it will be interesting to see how insurers adapt to these changes and leverage generative AI to stay ahead of the competition."

These are bold claims from an AI-powered chatbot which is not itself actually “intelligent”. Although colloquially referred to as an AI tool, ChatGPT is a GPT (Generative Pre-trained Transformer) large language model which uses deep learning techniques to generate human-like responses to text inputs in a conversational manner. It is generally accepted that content produced by ChatGPT can be factually unreliable.
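
For readers curious about what "generating human-like responses to text inputs" looks like in practice, the short Python sketch below uses the small, open-source GPT-2 model via the Hugging Face transformers library purely for illustration. ChatGPT itself is a far larger, proprietary system, and the prompt and model here are assumptions chosen only to demonstrate the general technique.

```python
# Illustrative only: a small open-source GPT model completing a text prompt.
# GPT-2 (via the Hugging Face "transformers" library) stands in for ChatGPT,
# which is a far larger, proprietary model accessed through OpenAI's service.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
completion = generator(
    "Over the next decade, AI will change property and casualty insurance by",
    max_new_tokens=40,        # length of the generated continuation
    num_return_sequences=1,
)
print(completion[0]["generated_text"])
```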

In responding to this question, ChatGPT did not creatively think of new ways in which AI may support the insurance industry. It simply drew on the data available to it and generated an answer to the question posed. Nevertheless, ChatGPT demonstrates the most recent technological leaps in data analytics and has prompted discussion about the AI advances to come. So when will we move past the AI hype to a position in which AI can help insurers and insureds? The answer may well be "soon, but…"

What ChatGPT might be able to do in insurance

The insurance underwriting process critically depends on data and statistics. The more an underwriter knows about a risk scenario, the better able they are to statistically analyse the probability and magnitude of potential loss and to price the risk accordingly. AI promises to increase, by orders of magnitude, the efficiency of collating, synthesising and transmuting vast quantities of raw, relevant data into forms usable by underwriters.
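
By way of illustration only, the short Python sketch below shows the expected-loss arithmetic that sits beneath risk pricing: an estimated probability of loss multiplied by an estimated loss size, grossed up by a loading for expenses and profit. The figures and the 30% loading are hypothetical assumptions for the example, not actual market practice.

```python
# Minimal sketch of expected-loss ("pure premium") pricing.
# All figures are hypothetical and for illustration only.

def indicative_premium(loss_probability: float,
                       expected_loss_size: float,
                       expense_loading: float = 0.30) -> float:
    """Pure premium = probability x magnitude, grossed up for expenses and profit."""
    pure_premium = loss_probability * expected_loss_size
    return pure_premium * (1 + expense_loading)

# e.g. a 2% annual chance of a $50,000 loss, with a 30% loading
print(indicative_premium(0.02, 50_000))  # -> 1300.0
```

The better the data on how often a loss occurs and how large it tends to be, the sharper the two inputs to this calculation become, which is where AI's capacity to collate and synthesise raw data is said to add value.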

In terms of risk knowledge and presentation, therefore, AI is potentially a game changer. We have already seen legislative change reducing the duty of disclosure, for domestic and consumer lines insurance, to a duty merely to take reasonable care to avoid making a misrepresentation. For some classes of insurance, the increased accessibility of publicly available information which AI offers may well make the duty of disclosure redundant altogether.

Much has been said about the use of AI technology in communicating with customers, and insureds are no exception. There is certainly scope for AI technology to perform basic interactions with customers at the time they take out their policies or wish to make a claim, including generating policy and claim documents and answering questions about the application or claims process.

Finally, there is growing recognition that the ability to analyse data and detect patterns may, as claimed by ChatGPT, provide a useful tool for insurers in the detection of fraud. This could be of real benefit to the industry: not just to insurers, but also to insureds, whose premiums may fall if insurers can reduce the amounts they pay out on claims that are not valid.
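
As a purely illustrative sketch of the kind of pattern detection referred to above, the Python snippet below flags claims whose amounts sit well outside a made-up set of historical claim amounts. Real fraud-detection models draw on far richer features and methods; the figures and the three-standard-deviation threshold here are assumptions for illustration only.

```python
# Minimal sketch: flag claims whose amount is an outlier versus history.
# Figures and threshold are hypothetical; real fraud models use far richer features.
from statistics import mean, stdev

def flag_outlier_claims(historical, new_claims, z_threshold=3.0):
    """Return claims lying more than z_threshold standard deviations from the historical mean."""
    mu, sigma = mean(historical), stdev(historical)
    return [amount for amount in new_claims if abs(amount - mu) / sigma > z_threshold]

history = [1_200, 950, 1_800, 2_100, 1_400, 1_650, 1_000, 1_750]
print(flag_outlier_claims(history, [1_500, 9_800]))  # -> [9800]
```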

All of this may happen soon. And then there is the "but".

The limits of ChatGPT (at the moment)

RGA, a reinsurance company, recently identified the following limitations that ChatGPT and other language models have in the context of insurance. According to RGA:

"These models:

  • Are trained on large amounts of text from the internet, which may contain biases and can result in the model reproducing biased language and stereotypes
  • Lack common sense knowledge, which can lead to nonsensical or unrealistic responses
  • May struggle with tasks that require knowledge of specific subjects or industries
  • Are not designed to perform tasks that require reasoning and logical thinking, such as deduction and induction
  • Lack empathy and cannot understand and respond emotionally to the context

ChatGPT only accesses written text at the time it was trained. It is not capable of utilizing tabular data, diagrams, or data sources created after its last training."

Given those shortcomings, overhyping the current state of the technology risks misleading insurers and insureds alike.

And why the human insurance underwriter isn’t going anywhere soon

It is unlikely that open-system AI technology will ever be able to advise an intending insured just what matters they need, and need not, disclose to their insurer in order to comply with their duty of disclosure, at least in Australia. The reason is that if there is risk-relevant information that is confidential to the insured, the AI will not know of it and will not be able to include it in its consideration of the question. If the matter in question is "of common knowledge", or is a matter "that the insurer knows or in the ordinary course of the insurer's business as an insurer ought to know" (including, presumptively, through its own use of AI facilities for underwriting purposes as foreshadowed above), then the information will be accessible to the AI but the insured has no obligation to disclose it (Insurance Contracts Act 1984 (Cth), subsections 21(2)(b) and (c)). Of course, it is possible that AI may advance to the point where it can provide the insured with a full legal analysis and advice, informed by all relevant legislation and case law, regarding the operation of the duty of disclosure in the context of the insured's affairs, but we are not there yet.

From the perspective of the insurer, the value of the data-analytical capability of AI technology is substantial but not determinative. If you ask an underwriter whether their discipline is a science or an art, they will likely say that it is a bit of both. We expect that insurance brokers and underwriters will not lightly abandon the human interactions, including with their insureds, which can be so important in gaining confidence in those controlling or managing the risks being underwritten and in identifying the so-called "moral hazard" with which insurers are often concerned.

Could we progress to a world where home and motor vehicle insurance policies are applied for by a customer, underwritten by an insurer and generated entirely through an AI technology platform, without any human input? Yes, and we expect that day may well not be far away. Much of the debate in that regard is likely to centre on insurers' and their regulators' oversight of technology-driven processes and the controls required to make them robust. Life insurance may follow relatively soon after, as insurers become increasingly reliant on publicly available health and other data relevant to their prospective insureds, to the exclusion of disclosures required from those humans themselves.

The more complex commercial policies, including directors' & officers' liability insurance and professional indemnity insurance, traditionally require complex underwriting which is not readily replaced by technology, AI or otherwise. Even here, however, there is much that AI can do to help. As always, there will be sceptics and detractors but, as with most technological advances throughout history, the overall net effect is likely to be positive.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.