# AI Disruption Risk Is Not Uniform — Thoma Bravo Framework
> Source: Thoma Bravo (2025). Framework on AI disruption risk across software categories.
## The Framework
Thoma Bravo splits software into two risk categories:
### More at Risk — Fast, Forgiving, Low-Stakes Environments
- Generalist knowledge
- Simple or low-stakes workflows
- Point solutions or standalone tools
- Higher tolerance for mistakes
- Low cost of failure
- Light regulatory oversight
- Commoditised or publicly available data
### More Insulated — High-Stakes, Complex, Regulated Systems
- Deep, specialised domain expertise
- Complex, critically important workflows
- Embedded platforms
- No room for mistakes *(see disagreement below)*
- High cost of failure
- Stringent regulation & compliance requirements
- Proprietary data or proprietary know-how
---
## My Take
The framework is one of the cleaner maps I've seen on where AI disrupts software first. Most of it holds.
**Where I agree:** Proprietary data, deep domain expertise, embedded workflows, regulatory validation timelines, and high cost of failure are genuine moats. These are durable because they're structurally hard to replicate or displace — they took years to build, they compound, and they require institutional trust that doesn't transfer easily.
**Where I push back: "No Room for Mistakes" is not a moat.**
It's an environment description, not a competitive protection. More importantly, it's exactly where the highest-capability AI will go first, because the ROI of AI precision over human precision is greatest there.
Consider surgical planning, compliance checking, financial risk modelling, and drug discovery. These fields aren't insulated because they demand precision. They're attracting the most ambitious, best-funded AI efforts *because* the value of getting it right is so high.
Precision requirements don't protect incumbents. They attract disruption at the highest level of capability.
The actual question is: **does the organisation have data no one else has, expertise no model can replicate from public training sets, and workflows so embedded that a rip-and-replace is a multi-year programme?** That's what holds. Not precision.
---
## Where to Build
Not just "regulated and complex" — that's necessary but not sufficient.
The durable space is at the intersection of:
- **Operational data that doesn't exist publicly** — generated through proprietary processes, accumulated over years
- **Multi-stakeholder workflows** — where embedding requires institutional alignment, not just technical integration
- **Contextual judgment that doesn't generalise** — decisions shaped by institutional history, edge cases, and domain-specific nuance that a general model won't have
Software that lives here earns the right to stay. Software that only lives in the "high-stakes" bucket without the above is at risk the moment a sufficiently capable and trusted AI system enters.
---
## Implications for Building
The Thoma Bravo framework is most useful as a **where to land** map, not a **why you're safe** map. Being in a regulated, high-stakes environment is the starting condition. It's not the moat itself.
Build toward the moat sequentially:
1. Land in a regulated, complex environment (reduces early competition)
2. Accumulate proprietary operational data (compounding advantage)
3. Embed deeply into workflows (switching cost)
4. Encode domain expertise into your system (not just your team)
See: [[AI era Defensibility]] for the motte-and-bailey framing on how this sequencing works.
---
## Related
- [[AI era Defensibility]]
- [[Land-and-Expand in Enterprise AI]]
- [[Industrial AI Unit Economics]]
- [[Bespoke Engineering in Industrial AI]]
- [[Sovereign AI Positioning]]
Tags: #investing #deeptech #AIstrategy #defensibility