Vertical models now beat frontier models at domain-specific tasks.
Intercom built Apex, a custom model for their Fin customer service agent. Apex outperforms GPT-5.4 and Opus 4.5 on support resolution. A company with 45,000 customers and millions of conversations trained a model that surpasses the best general-purpose systems — on Intercom's specific problem.
This is [[Christensen]] disruption running its course. Frontier labs optimize for generality. Vertical builders optimize for a single domain and win where it counts.
# The Full-Stack, Application-Specific Vertical AI Company
Durable differentiation now requires three layers:
1. the application,
2. the AI orchestration,
3. the model itself.
Companies that control all three compound their advantage: every customer interaction generates training data, and every resolved ticket sharpens the model. The [[Azraq Data Flywheel]] pattern applies here: **proprietary data loops create moats that widen with usage.**
Open-weight models make this possible. Fine-tuning Llama costs a fraction of training from scratch. The [[Foundational Models MOC]] describes foundation models as platforms for building [[specialised models]]. Vertical builders now treat them exactly that way.
# Where the Value Moves
The [[AI Capex Super-Cycle]] flooded the market with inference capacity. Generation has become a commodity.
The constraint has shifted to **domain-specific correctness**: can the model make the right call on a freight claim, an energy contract, or a customer refund? **That gap between raw capability and operational trust is where value accrues.**
[[Agent Skills as Codified Domain Expertise]] captures the same insight from a different angle: general models can code entire applications but collapse on domain-specific decisions. A [[Bottleneck Business]] forms where domain expertise, or the capability to act on it, is scarce, almost sacred, and slow to replicate.
Karpathy calls this **"speciation"**: one foundation-model species branches into thousands of domain-adapted variants, each optimized for its niche. Cursor's Composer 2 demonstrates the pattern in coding. Intercom's Apex demonstrates it in support. [[Domain-Specific SLMs for Risk Intelligence]] describes it in infrastructure risk assessment. The pattern repeats across every vertical with sufficient proprietary data.
# Competitive Implications
[[Incumbent Bundling Risk]] warns that platform vendors ship "good enough" AI features at zero incremental cost. Vertical model builders survive by **going deeper than incumbents can.** The moat is the evaluation data, the training corpus, and the domain expertise encoded in both, not the model architecture.
This maps to [[AI era Defensibility]]: win distribution fast, then build depth. Open-source the tools, own the standard, compound the knowledge. [[Wright's Law]] should apply — every additional vertical model compresses the cost of the next one as frameworks and evaluation patterns become reusable infrastructure.
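Wright's Law can be stated concretely: each doubling of cumulative output cuts unit cost by a fixed fraction. A minimal sketch of what that implies for building successive vertical models, with illustrative numbers (the function name, the 100-unit first cost, and the 20% learning rate are hypothetical, not figures from this note):

```python
import math

def wrights_law_cost(n: int, first_cost: float, learning_rate: float) -> float:
    """Cost of the n-th unit under Wright's Law: each doubling of
    cumulative output reduces unit cost by `learning_rate` (0.2 = 20%)."""
    b = -math.log2(1.0 - learning_rate)  # experience exponent
    return first_cost * n ** (-b)

# If the first vertical model costs 100 units of effort and each
# doubling of models built cuts cost by 20%, the 2nd costs 80
# and the 4th costs 64.
print(round(wrights_law_cost(2, 100, 0.2), 1))  # 80.0
print(round(wrights_law_cost(4, 100, 0.2), 1))  # 64.0
```

The point of the exponent form is that the savings never flatten to zero: every additional model built (shared evaluation harnesses, reusable fine-tuning pipelines) keeps pushing the marginal cost of the next one down.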
The age of one model to rule them all is ending. *The age of vertical models has arrived.*
Links:
- [[Foundational Models MOC]]
- [[Future of Foundational Models]]
- [[Domain-Specific SLMs for Risk Intelligence]]
- [[Agent Skills as Codified Domain Expertise]]
- [[Incumbent Bundling Risk]]
- [[AI Agents Stack]]
- [[AI era Defensibility]]
- [[AI Capex Super-Cycle]]
- [[Bottleneck Business]]
- [[Consultancy-to-Platform Transition]]
---
Tags: #deeptech #kp #systems