Scout InsurTech Interview with Mayflower Specialty
- Michael Fiedel
- Jan 16
Mayflower Specialty is an early-stage managing general agent focused on one of the most complex and fast-emerging risk categories in the insurance industry: AI liability. The company is built around the belief that traditional insurance frameworks are not equipped to address the systemic and operational risks created by advanced AI adoption. Scout InsurTech’s Michael Fiedel sat down with Founder Jeremy Epstein to learn more about how Mayflower Specialty is impacting the industry.

Who are your clients?
Our clients are companies that are actively integrating advanced AI systems into their core operations and are beginning to recognize that the risk profile of their business has fundamentally changed. These are organizations using AI beyond experimentation, where models, agents, and automated decision-making are influencing outcomes at scale. Many of them understand the upside of AI very clearly, but they are operating without a clear framework for how risk transfers alongside that adoption.
In many cases, these clients are sophisticated buyers of insurance. They already carry cyber coverage and other traditional policies, yet they are increasingly aware that those products were not designed with AI-specific failure modes, aggregation risk, or emerging liability theories in mind. Our clients are looking for a credible, insurance-native solution that acknowledges how AI actually behaves in production environments.
What does your product do?
Mayflower Specialty is building AI liability insurance from the ground up. Our product is designed to address the novel exposures created by advanced AI systems, including systemic failures, model-driven decision risk, and evolving regulatory and legal scrutiny. Rather than attempting to retrofit existing cyber or professional liability policies, we are creating coverage structures that reflect how AI risk manifests in real-world operations.
At a high level, we are providing a risk transfer mechanism that allows companies to adopt AI more confidently. Underneath that, the product is supported by underwriting models that focus on how AI systems are built, deployed, governed, and monitored. The objective is not just to insure AI, but to understand it well enough to underwrite it responsibly as the technology continues to evolve.
How much capital have you raised?
We’ve been fortunate to see strong investor demand from day one. Our first institutional round was oversubscribed, and we made deliberate choices about which VCs to bring in.
We’ll be raising a larger round in Q2 to support growth, and those conversations are already underway. Having spent time in the insurtech VC world, I already have a shortlist of investors in mind.
We’ve seen strong interest from investors who view AI liability as a new insurance category rather than a short-term trend. Many of those conversations are informed by lessons learned from early cyber insurance, where capital entered the market either too late or without sufficient underwriting discipline.
Was the company born from within or outside the industry?
Mayflower was very much born from within the insurance industry. My background includes underwriting and product innovation at Nationwide, where I worked on emerging risks and the challenge of adapting legacy insurance frameworks to new exposures. That experience made it clear how difficult it would be to address AI risk using existing policy structures and underwriting assumptions.
Having spent time both underwriting and building new products, I’ve seen firsthand where traditional approaches break down. Mayflower is the result of combining that insurance experience with a deep focus on how AI technologies are actually being adopted and scaled across industries.
What growth metrics have you accomplished over the last 12 months?
Our most meaningful progress has been foundational rather than headline-driven. Over the past year, we’ve focused on building the right team, establishing disciplined underwriting standards, and selecting partners who share our long-term view of the market. We have been very intentional about not growing for growth’s sake.
It would be easy to talk about volume, policy count, or rapid expansion, but history shows that MGAs that prioritize speed over structure often struggle later. Our approach has been to grow profitably or not at all. That discipline has guided how we structure incentives, how we think about capacity, and how we engage with brokers and reinsurers.
Within your domain, what is the current challenge that the industry is facing?
The biggest challenge is understanding and managing systemic AI risk. There is significant interest across the insurance ecosystem, which is encouraging, but there is also appropriate caution. Insurers and reinsurers are asking hard questions about aggregation, vendor concentration, and correlated failures across AI systems that may rely on similar architectures or providers.
Another challenge is definitional. The industry is still aligning on what constitutes AI risk versus cyber risk, operational risk, or professional liability. Without clarity, it’s difficult to price, cap, or manage exposure effectively. These challenges require new modeling approaches and a willingness to acknowledge uncertainty rather than mask it with familiar policy language.
How does Mayflower take a unique approach to providing value?
We treat AI liability as its own category of risk. That means we are not simply extending cyber policies or layering endorsements onto existing products. We spend significant time modeling how AI systems function, how failures propagate, and where aggregation risk can emerge across a portfolio.
We also place a strong emphasis on alignment. We work closely with brokers and capacity partners who understand that this market must be built deliberately. We are not chasing irrational growth or excess capacity. Instead, we focus on transparency, data richness, and ensuring that all parties understand how risk is being selected and managed.
What inspired you to start this company?
There was a clear moment when investors and industry partners expressed concern about missing the next major insurance category, much like what happened with early cyber insurance. Hearing that perspective, combined with my own experience watching underwriters struggle to assess AI-related risk using outdated frameworks, made it clear that this problem needed a dedicated solution.
AI represents a fundamental shift in how businesses operate, and the insurance market needs to respond with equal seriousness. That realization, paired with the level of interest we were seeing from thoughtful partners, made it feel like the right time to commit fully and build something durable.
Can you share any goals for the next 12 months?
Over the next year, our focus is on continuing to refine how we underwrite, analyze submissions, and support distribution partners. We are investing heavily in how we assess data, how we communicate risk insights to brokers, and how we prepare for claims scenarios that are still largely theoretical but increasingly plausible.
Beyond underwriting, we are exploring how AI can improve our own internal operations, from submission analysis to reserve recommendations and claims support. The goal is not just efficiency, but better decision-making. We want to demonstrate that AI can be used responsibly within insurance while maintaining strong human oversight and judgment.