Today, insurance across health, life and property lines remains among the least trusted sectors. Recent research shows that a host of challenges, from climate-related risks to rising cybersecurity threats and growing geopolitical tensions, is putting pressure on existing business models and making it harder to build trust.
In 2024, weather-related disasters resulted in US$368bn in economic losses, of which 60 percent were uninsured, leaving those affected exposed to the impact of climate change. Against this backdrop, the rise of AI is poised to transform the industry, prompting insurance executives to ask whether the technology can go beyond productivity gains to address these systemic issues and build trust.
AI could make insurance affordable for hundreds of millions of people in the developing world by reducing administrative overheads. Some estimate that generative AI could automate 25-40 percent of labour time across most functional roles. Already, generative AI is boosting productivity, cutting coding times by 30-50 percent and putting information at the fingertips of agents, brokers and customer representatives. Ultimately, these efficiency gains could translate into lower premiums and make insurance products more accessible.
Automation is also reshaping the customer experience: a US-based insurer says it handles around 40 percent of its claims instantly through its AI system, allowing customers to receive payouts within seconds. Insurance representatives at some firms now use ChatGPT to draft more than 50,000 customer emails per day, generating empathetic and clear messages that reduce the back-and-forth after a claim submission.
Yet when trust in the industry is low, customers may prefer speaking with a representative to receiving emails that sound inauthentic, as though written by a bot. The productivity gains can help here: as employees become more productive with AI tools, they can spend more time deepening client relationships and offering personalised advice.
AI can also drive innovation through real-time risk analysis. For example, insurers can deploy AI to issue early warnings to their customers, encouraging them to take preventative action. This can reduce both climate-related losses and claims disbursements, and the savings are sizeable: some estimate that using AI to mitigate hazards and reduce vulnerabilities could save US$70bn in direct disaster costs globally by 2050. Additionally, AI can help underwrite cyber risks by estimating the likelihood and impact of such events, in a market where an estimated 99 percent of potential losses are currently uninsured.
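To make the underwriting point concrete, the sketch below shows one way an insurer might turn a model's estimates of breach likelihood and loss severity into an expected annual loss and a tail estimate. It is a minimal, hypothetical illustration in Python, assuming a Poisson breach frequency and a lognormal severity distribution; the figures and parameter names are invented for the example and do not come from any specific insurer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model outputs for a single policyholder; the figures are
# illustrative assumptions, not values from the article.
annual_breach_rate = 0.08    # expected breaches per year (Poisson mean)
severity_median = 250_000    # median loss per breach, in US$
severity_sigma = 1.2         # lognormal shape parameter (heavy right tail)

n_years = 100_000
# Simulate how many breaches occur in each simulated year...
breach_counts = rng.poisson(annual_breach_rate, size=n_years)
# ...and the total loss in that year (sum of lognormal severities).
annual_losses = np.array([
    rng.lognormal(np.log(severity_median), severity_sigma, size=k).sum()
    for k in breach_counts
])

expected_loss = annual_losses.mean()             # "pure premium" component
tail_loss_99 = np.quantile(annual_losses, 0.99)  # 1-in-100-year loss estimate

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"99th-percentile annual loss: ${tail_loss_99:,.0f}")
```

In practice, the AI contribution would lie in producing the frequency and severity estimates from telemetry and threat data; the pricing arithmetic itself follows standard frequency-severity actuarial practice.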
Nonetheless, the technology brings risks of its own. Large language models are prone to ‘hallucinations’ and, without humans in the loop, could misprice risks or mishandle claims. In the absence of proper guardrails, these models can also leak confidential customer data or intellectual property and amplify bias in underwriting or claims handling.
At the organisational level, many firms face operational and technical challenges with the technology. Some insurers run IT systems that date back 40 years and are incompatible with modern AI tools. Integrating new tools requires more than capital expenditure: it demands a culture that encourages brokers, claims managers and underwriters to use AI responsibly, which in turn means revising internal policies and upskilling staff.
Navigating a complex and fragmented regulatory framework poses another challenge for insurers. In the EU, data-protection laws discourage or prevent insurance companies from using sensitive customer information, such as biometric and medical data, for underwriting. These measures are vital for maintaining consumers’ privacy and reducing discrimination, but they can also slow the pace of innovation. Similarly, regulators in the US require insurers to demonstrate that their products do not cause harm, a standard that can pose methodological challenges and costs for insurers.
Despite its potential, AI adoption remains uneven across the insurance industry. Insurtech firms specialising in cyber insurance, for example, have integrated AI into their IT infrastructure, often treating it as a prerequisite. In contrast, many incumbents still rely on legacy systems that hinder the adoption of such tools. Progress also varies across departments: use cases such as fraud detection and software development have seen faster uptake and clearer returns, while other applications lag behind.
Yet broader deployment of AI does not guarantee better customer outcomes. If these tools are employed primarily to cut costs and increase profits rather than to improve coverage or fairness, or if they are implemented without proper guardrails, they risk deepening the very trust gap they have the potential to bridge. The backlash faced by UnitedHealth, after an AI algorithm allegedly denied coverage to a patient who later died, illustrates how misuse can trigger legal scrutiny, regulatory action and public outrage.
While its adoption is not without obstacles, the technology presents an opportunity to address many of the underlying issues that undermine confidence in the sector, ultimately benefiting both insurers and policyholders. If applied responsibly, AI could help make insurance coverage more affordable, tailor policies to individual needs and strengthen societal resilience against emerging risks such as climate change and cyber threats.


