
When a burst pipe floods a kitchen or a fender-bender turns into a costly claim, people rely on insurance to help sort out the chaos. But now, a different kind of disruption is moving through the industry—one born not from cracked ceilings or icy roads, but from the servers of Silicon Valley.
Generative artificial intelligence, the same kind of technology powering ChatGPT and other language models, is swiftly making its way into the world of underwriting, customer service, and claims. And the industry finds itself at a tipping point. According to a new IBM report, insurance executives are nearly split down the middle: 51% see generative AI as an opportunity; 49% call it a risk.
Yet few are standing still. The same report found that 77% of insurance leaders feel the need to adopt generative AI quickly to stay competitive.
Already, early adopters are seeing signs of success. Insurers who’ve deployed generative AI in customer-facing systems report a 14% higher customer retention rate and a 48% bump in Net Promoter Score. Some are embedding generative AI across multiple sales channels—direct, agent-based, and through banks—yielding improvements in both customer acquisition costs and satisfaction.
“Insurers are walking a tightrope,” IBM’s researchers wrote, “between rapidly building new gen AI capabilities and managing gen AI risk and compliance.”
That tightrope includes the looming threat of inaccurate or biased outputs, cybersecurity lapses, and the widening chasm between what companies think customers want and what customers actually value.
A Mismatch in Expectations
One of the report’s more striking findings is the disconnect between insurance providers and their policyholders. While executives prioritize chatbots, augmented service, and developer productivity, customers place higher value on personalized pricing, tailored product recommendations, and transparent use of their data.
Insurers that can close the expectation gap may leap ahead, while those that misread consumer sentiment risk being left behind.
And for many customers, the concerns are concrete: inaccurate information, invasive data practices, and opaque decision-making processes. These issues are especially fraught when it comes to core coverage areas like health, property, or general liability insurance, where a denied claim or flawed underwriting can have major financial consequences.
But even the most ambitious strategies must contend with an often-overlooked hurdle: old code. The back-end systems that underpin most insurers’ operations were not designed for generative AI, or anything close to it. Executives point to “technical debt”—a buildup of outdated or incompatible software—as a primary drag on innovation.
More than half of insurance leaders say their data is inadequate, inaccessible, or incomplete, slowing their speed to market. These constraints limit what large language models can learn from and how accurately they can function.
To break free of these bottlenecks, some insurers are turning to hybrid-by-design architectures, which blend legacy systems with newer, more modular technologies. The goal is to modernize without losing the institutional memory or data that traditional platforms hold.
Who Holds the Steering Wheel?
Then comes the question of governance. Should generative AI development be centralized, steered by a core team, or decentralized across the organization?
According to IBM, insurers experimenting with approaches that decentralize AI development across teams while keeping governance centralized are seeing faster product rollouts and better customer metrics. That balance may prove key.
The stakes are growing along with the market. A recent report by Allied Market Research projects that the global market for generative AI in insurance will balloon from $761 million in 2022 to $14.4 billion by 2032. That’s a staggering compound annual growth rate of more than 34%.
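For readers who want to check that figure, a quick back-of-the-envelope calculation, assuming straight compounding over the ten-year span the report covers, runs as follows:

\[
\left(\frac{\$14.4\ \text{billion}}{\$0.761\ \text{billion}}\right)^{1/10} - 1 \;\approx\; 18.9^{0.1} - 1 \;\approx\; 0.34,
\]

or roughly 34% growth per year, in line with the report's projection.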