# Technical Moat Assessment Framework
When evaluating deep tech companies, test every claimed moat against three questions:
1. What commoditizes in 12-18 months?
2. What is tacit know-how vs. true IP?
3. What happens when the incumbent ships a 60% solution for free?
Component-level moat scoring for industrial AI:
**Standard (no moat):** Knowledge graphs, surrogate models, RL-based optimization, industrial data integration. All are available as open-source tools, cloud services, or incumbent platform features. These are primitives, not products.
**Potentially defensible:** End-to-end pipeline automation (sensor data to deployed model without manual engineering). This is the Palantir analogy: the integration layer is harder to replicate than any individual component. Auto-generating physics models from facility data, if real, is the key to [[Deployment Velocity]] and the actual moat.
**Durable (rare):** Sovereign/political positioning in specific geographies. Customer trust built through successful production deployments. Proprietary training data from customer operations (assuming contractual rights).
The pattern: individual AI components commoditize fast. Integration and orchestration commoditize slowly. Customer relationships and political positioning don't commoditize at all.
When a startup says "our moat is adaptive AI," stress-test it. Is each step standard? (Probably.) Is the orchestration of all steps genuinely automated? (Rarely.) Is it patented? (Almost never.)
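The tiered scoring above can be sketched as a toy checklist. This is a minimal illustration, not a validated model: the component names, tier labels, and the simple averaging rule are all assumptions for demonstration.

```python
# Toy sketch of the component-level moat scoring framework.
# Tiers and scores are illustrative assumptions, not a calibrated model.
from dataclasses import dataclass

TIER_SCORE = {"standard": 0, "potentially_defensible": 1, "durable": 2}

@dataclass
class Component:
    name: str
    tier: str  # one of TIER_SCORE's keys

def moat_score(components: list[Component]) -> float:
    """Average tier score; standard components dilute the claimed moat."""
    if not components:
        return 0.0
    return sum(TIER_SCORE[c.tier] for c in components) / len(components)

# Hypothetical claimed moat: mostly commodity primitives plus one
# integration layer and one durable customer relationship.
claimed = [
    Component("knowledge graph", "standard"),
    Component("RL-based optimization", "standard"),
    Component("end-to-end pipeline automation", "potentially_defensible"),
    Component("production customer trust", "durable"),
]
print(round(moat_score(claimed), 2))  # 0.75: commodity parts drag it down
```

The averaging rule makes the stress test concrete: each "standard" component a startup lists pulls the score toward zero, which mirrors the point that primitives are not products.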
Related: [[Incumbent Bundling Risk]], [[IP Strategy for Deep Tech Startups]], [[Industrial AI MOC]], [[First Principles and Mental Models MoC]]
---
Tags: #deeptech #investing #firstprinciple