**Closed vs. Open Models**
Closed AI models (OpenAI, Anthropic, Google) charge for access and generate billions in revenue, with gross margins of **50–70%**. But they burn through that revenue, plus fresh capital, retraining models to cover every edge case. Open-source models (Meta’s Llama, Mistral) can perform nearly as well, with far lower costs and more efficient scaling, though their direct revenue potential is limited.
[[Open source is often misunderstood]] [[Why Giving Away Business Models is Genius]]
**Investment Outlook**
The “smart money” strategy may be to **own both sides**: equity in the “walled cathedrals” of closed players and positions in the “open bazaar” of open models.
**AI’s Leverage on Human Labor**
What is really being acquired isn’t hardware but **human leverage**. With AI, two engineers could outperform a hundred lawyers or a large consultancy. The new advantage comes from **implementation skill, not headcount**. Old professional moats risk turning into liabilities as machines level the playing field.
**Verification as the Bottleneck**
AI capability is limited less by model size and more by **what we can verify**. For tasks with binary, checkable outcomes (math problems, code compilation, games, molecules), AI can excel. For subjective or contextual tasks (taste, judgment, personal preferences), progress is slower. Errors also compound: an agent that is 99% reliable per step falls to roughly **60.5% end-to-end accuracy after 50 sequential decisions** (0.99^50 ≈ 0.605), since small mistakes cascade.
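A quick sketch of that compounding arithmetic, assuming independent steps; the only figures taken from the note are the 99% per-step reliability and the 50-step horizon:

```python
# Compounding reliability: a chain of independent steps only succeeds
# if every step succeeds, so end-to-end accuracy decays geometrically.
def chain_accuracy(per_step_reliability: float, steps: int) -> float:
    return per_step_reliability ** steps

for steps in (1, 10, 50, 100):
    print(f"{steps:>3} steps at 99% each -> {chain_accuracy(0.99, steps):.1%} end-to-end")
# e.g. 50 steps at 99% each -> 60.5% end-to-end
```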
**Path Forward**
- In tightly verifiable domains, AI moves quickly to mastery.
- In broader human contexts, space remains for human oversight and judgment.
- Tomorrow’s edge will be measured in **“insight per token” rather than raw teraflops**.
- Multi-agent reinforcement learning verifiers are emerging to score subjective outputs—from legal writing to manufacturing workflows—turning them into feedback that models can learn from (a rough sketch follows this list).
- This shift from static training data to **agentic learning environments** is a major directional arrow of AI progress.
- Multi-modality is further collapsing the gap between imagination and execution.
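The note doesn’t say how such verifiers work internally. As a loose sketch of the general pattern, assume a best-of-n loop in which candidate drafts are scored against a rubric and the scores become training signal; `RUBRIC`, `generate_draft`, and `verifier_score` below are hypothetical stand-ins, not any specific system:

```python
import random

# Hypothetical stand-ins: in a real system `generate_draft` would call a
# language model and `verifier_score` would be a learned verifier (or a
# panel of them) scoring against a rubric. Here both are toy functions so
# the loop runs end to end.
RUBRIC = ["cites the governing clause", "states the risk", "proposes a remedy"]

def generate_draft(prompt: str) -> str:
    # Pretend "model output": randomly covers some rubric points.
    covered = random.sample(RUBRIC, k=random.randint(0, len(RUBRIC)))
    return f"{prompt}: " + "; ".join(covered)

def verifier_score(draft: str) -> float:
    # Turns a subjective judgment ("is this a good summary?") into a number
    # by checking rubric coverage; this score is what a policy can learn from.
    return sum(item in draft for item in RUBRIC) / len(RUBRIC)

def best_of_n(prompt: str, n: int = 8) -> tuple[float, str]:
    # Sample n drafts and keep the one the verifier rates highest.
    drafts = [generate_draft(prompt) for _ in range(n)]
    return max((verifier_score(d), d) for d in drafts)

if __name__ == "__main__":
    score, winner = best_of_n("Summarize the indemnification clause")
    print(f"verifier score {score:.2f}: {winner}")
    # The (prompt, draft, score) triples become feedback, e.g. data for
    # rejection-sampling fine-tuning or rewards in an RL loop.
```

In practice the verifier would itself be a trained model rather than a keyword check, and the scored drafts would feed fine-tuning or an RL reward; that is what “making the subjective scorable” amounts to operationally.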
**Core Idea**
Instead of just scaling compute and model size, the deeper arbitrage lies in **making the subjective scorable**—turning human context and preference into structured, verifiable feedback for AI.