Scout InsurTech Spotlight with Timo Loescher
- Michael Fiedel
- Dec 16, 2025
- 5 min read
Timo Loescher is the Insurance Lead for North America at Quantexa, the global data, analytics, and AI software company pioneering Decision Intelligence. Quantexa helps insurers, banks, and public sector organizations use connected data to fight fraud, reduce risk, grow revenue, and make faster, more confident decisions. Timo was interviewed by Michael Fiedel, Co-Founder at Scout InsurTech and Co-Founder at PolicyFly, Inc.

Timo, what is driving the disconnect between the hype around GenAI in insurance and the limited success most companies have seen in real deployments?
“If you look back over the past few years, you will find consistent narratives from consulting firms like McKinsey, Deloitte, and Accenture predicting massive operational transformation from GenAI, particularly across core insurance functions such as distribution, underwriting, and claims. These areas contain exactly the type of manual and knowledge-heavy work that seems ideal for AI-driven improvement.
However, the reality of what insurers are deploying is very different. Most real-world implementations are horizontal tools designed to work across any industry, which means they only scratch the surface of insurance’s real complexity. These are tools that summarize meeting notes, transcribe conversations, brainstorm ideas, or condense documents. They are useful, but they remain shallow. They help with minor tasks, not with the deep, insight-heavy work that affects financial performance or operational efficiency.
The real opportunity comes from verticalized GenAI that deeply engages with an insurer’s own data. This includes transactional history, enrichment data, customer behavior, relational patterns, and signals that reveal what tends to happen after certain events. When you understand how customers behave, what correlates with conversion, or which relationships matter, AI can begin guiding workflows in ways that materially impact outcomes. The gap between hype and results is really the gap between generic tooling and deeply integrated, domain-specific AI.”
Why is vertically integrated GenAI so critical to achieving meaningful results? And what makes it so challenging for insurers to build these systems?
“There are a few central reasons. First, insurance is a regulated industry. Any move toward automation in underwriting or claims raises legitimate concerns about governance, accountability, and model risk. When you introduce agent-driven workflows, you must be clear about where business rules come from, the data they are validated against, who owns them, and how they connect to organizational policy. You cannot delegate decisions to AI unless you have clear traceability and confidence in the inputs, outputs, and logic.
Second, there is the matter of trust. Anyone who has used GenAI has seen it hallucinate. Underwriters and claims professionals notice immediately when an AI response conflicts with a trusted source or includes outdated or inaccurate information, and that casts doubt not just on that response but on every prior and future response from the tool. If the system is not grounded in reliable data, users will not trust it.
This leads to the third point. The most effective solution is to rely on internal data that is already validated and well understood. That often leads organizations to use smaller models trained primarily on proprietary data, or to connect directly into internal systems rather than depending solely on public web data. This dramatically reduces hallucinations and supports better governance.
But insurers rarely have a unified data environment. Data is scattered across products, lines of business, enrichment feeds, and legacy systems. Mergers, new offerings, market expansions, and technology changes all create fragmentation. Master data management programs work hard to keep up, but it is a tough challenge, and they only master specific domains rather than joining up all data contextually. Vertical GenAI demands reliable, well-integrated data, and insurers are still building that foundation.”
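
To make the grounding point concrete, here is a minimal sketch of the pattern Timo describes: the model answers only from vetted internal records, and every snippet carries a source identifier for traceability. The records, retriever, and prompt format are illustrative assumptions for this article, not Quantexa's implementation; a production system would use vector search over governed stores rather than keyword overlap.

```python
# Minimal retrieval-grounded prompt assembly (illustrative, hypothetical data).
# The idea: answer only from vetted internal records, and keep a traceable
# source ID attached to every piece of context handed to the model.
from dataclasses import dataclass

@dataclass
class Record:
    source_id: str  # pointer back to the system of record
    text: str

INTERNAL_RECORDS = [
    Record("policy-admin:POL-1042", "Policy POL-1042 covers commercial property with a 2M limit."),
    Record("claims:CLM-77", "Claim CLM-77 on POL-1042 closed in 2024 with a 40k payout."),
    Record("crm:ACC-9", "Account ACC-9 has renewed POL-1042 early three years running."),
]

def retrieve(question: str, records: list[Record], k: int = 2) -> list[Record]:
    """Naive keyword-overlap scoring; a real system would use vector search."""
    terms = set(question.lower().split())
    ranked = sorted(records,
                    key=lambda r: len(terms & set(r.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    hits = retrieve(question, INTERNAL_RECORDS)
    context = "\n".join(f"[{r.source_id}] {r.text}" for r in hits)
    return ("Answer using ONLY the sources below and cite their IDs. "
            "If the sources are insufficient, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

print(build_grounded_prompt("What is the claims history on POL-1042?"))
```

Whatever the retrieval machinery, the contract is the same: every answer must trace back to a source ID a professional can verify.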
How should companies balance automation and human oversight to build trust and reliability in AI-driven decisions?
"First of all, AI is not likely to replace large portions of the insurance workforce anytime soon. Ignoring for a second the regulatory risks of fully automating decisions, there is simply more than enough work to go around. Underwriters face more submissions than they can review, claims teams face more volume than they can process, and customer experience has plenty of room for improvement. This first wave of AI is not about eliminating people. It is about helping them focus on higher value work.
So what should be automated versus what should remain under human review? Many people fall back on the mantra of “automate the easy tasks and leave complex tasks to humans.” But it is more nuanced than that.
With the right data integrations and an AI agent that can explain its reasoning and provide traceability back to source systems, even complex decisions can be driven through full automation. But regulations, ethical considerations, and the need to preserve professional judgment argue against that, especially in the most complex situations.
There is also a practical issue: if you automate all the simple work, you remove the training ground where early-career professionals build good decision frameworks before progressing to more complex work. Without that foundation, they will struggle to validate AI recommendations or handle edge cases.
A more sustainable approach is decision augmentation across the board. AI should gather context, assemble data, highlight patterns, and surface explainable recommendations. Humans should still review and decide. The broader goal is to support better decisions, not to remove people from the loop. Ultimately, this context-led augmentation approach lets you power thoughtful education programs even as you automate more and more low-complexity work over time.”
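
As a small illustration of that augmentation stance, the sketch below (hypothetical names and fields, not any particular product's API) encodes the contract Timo outlines: the system may only recommend, every recommendation carries evidence references back to source systems, and nothing executes without a named human approver.

```python
# Human-in-the-loop recommendation record (illustrative sketch).
# The AI assembles evidence and a rationale; execution is gated on a
# named human approval, so the decision stays with a person.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str
    evidence: list[str]             # source-system references for traceability
    approved_by: str | None = None  # remains None until a human signs off

def execute(rec: Recommendation) -> None:
    if rec.approved_by is None:
        raise PermissionError("No autonomous execution: human approval required.")
    print(f"Executing '{rec.action}' (approved by {rec.approved_by})")

rec = Recommendation(
    action="Refer submission SUB-314 for senior underwriter review",
    rationale="Loss pattern resembles prior adverse-selection cases.",
    evidence=["claims:CLM-201", "claims:CLM-305", "enrichment:FEED-7"],
)
rec.approved_by = "j.underwriter"  # the reviewer's explicit sign-off
execute(rec)
```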
Can you share a concrete example of a successful GenAI implementation that shows the value of connecting internal and external data to improve business outcomes?
"One strong example comes from the banking sector, although the model translates very well to insurance.
A large global bank we work with have relationship managers who oversee customer engagement across personal banking, business banking, wealth management, lending, and more. Historically, all those systems were siloed. They use Quantexa to ground their GenAI solutions in a deeply integrated, unified, and contextualized data environment.
The bank created a system where external signals, such as corporate directorship changes, enrichment data updates, or inquiries about opening an account, trigger an AI agent to gather and unify data across all silos. It uses advanced matching logic to create a knowledge graph of every entity connected to the customer. This includes businesses they are associated with, other directors, household members, vehicle history, and various financial relationships.
The agent then analyzes the graph to identify cross-sell and upsell opportunities, churn risks, service needs, and recommended next actions. All of this is delivered directly into the relationship manager’s CRM in real time, with context, source traceability, and clear explanations. The human still makes the final decision, but with far better decision intelligence.
The results have been impressive, including roughly a fifty percent improvement in conversion across upsells, cross-sells, and new prospecting. The system does not automate decisions. It automates the gathering and contextualizing of both internal and external data at a scale no individual could manage manually.
Insurance can absolutely apply this model. Whether the focus is underwriting, claims, fraud, or distribution, the central challenge is the same. Once you unify and contextualize fragmented data and ground GenAI in that data, meaningful value becomes achievable very quickly.”
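
To make the knowledge-graph step in that example more tangible, here is a generic sketch using networkx. The entities and relationships are invented for illustration, and real entity-resolution logic (Quantexa's included) is far more sophisticated than hand-asserted edges; the point is simply that once records from different silos resolve to shared entities, an agent can walk the customer's neighborhood for context.

```python
# Illustrative knowledge-graph sketch: records from separate silos resolve to
# shared entities, and an agent gathers the customer's surrounding context.
import networkx as nx

G = nx.Graph()
# Each edge asserts a relationship discovered across source systems.
G.add_edge("customer:Ada Ray", "business:Ray Logistics Ltd", rel="director_of")
G.add_edge("business:Ray Logistics Ltd", "customer:Sam Ortiz", rel="director_of")
G.add_edge("customer:Ada Ray", "household:H-22", rel="member_of")
G.add_edge("household:H-22", "asset:VAN-481", rel="owns")

def neighborhood(graph: nx.Graph, entity: str, hops: int = 2) -> list[str]:
    """Everything within `hops` of the entity: the context an agent analyzes."""
    lengths = nx.single_source_shortest_path_length(graph, entity, cutoff=hops)
    return sorted(n for n in lengths if n != entity)

print(neighborhood(G, "customer:Ada Ray"))
# -> co-directors, household members, and assets linked to the customer
```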