# An AI-Native Hedge Fund: A First-Principles Map *A strategic map for building a concentrated, conviction-driven, AI-native fund in the tradition of Ackman and Burry — with a systematic overlay.* > [!tldr]- TL;DR — the 7 things to hold in your head > 1. A hedge fund is a machine that converts **capital + information → risk-adjusted return**. AI doesn't change the definition; it collapses the cost of the conversion function. > 2. **AI-native ≠ AI-assisted.** AI-native means agents call agents (and occasionally humans), not the reverse. > 3. The **Ackman/Burry archetype** (concentrated, long-horizon, fundamental) is uniquely AI-amplifiable — edge comes from research depth, not reaction speed. > 4. Edge lives in: **synthesis-at-scale**, **memory coherence**, **asymmetric patience**, **counterfactual reasoning**, **longitudinal behavior tracking**. Edge is NOT in speed, volume, or standard factor alpha. > 5. The fund is a directed graph of ~11 stages (universe → origination → research → construction → execution → monitoring → exit → post-mortem → ops). Most stages agent-ify; a few are human-gated. > 6. Defensibility compounds with time through **accumulated institutional memory** (post-mortems, pattern extraction, calibration). See [[Defensibility Principles MOC]] and [[AI era Defensibility]]. > 7. Build the proprietary layer; assemble the commodity layer below. --- ## Map of Contents > Click any entry to jump to the section. **[[#Part I — Foundations]]** - [[#1. What a hedge fund actually is|1. What a hedge fund actually is, stripped down]] - [[#2. What "AI-native" means|2. What "AI-native" means (and what it doesn't)]] - [[#3. Why now — the 2026 state of the world|3. Why now — the 2026 state of the world]] - [[#4. The archetype we're building around|4. The archetype we're building around]] **[[#Part II — First Principles]]** - [[#5. The five sources of investment edge|5. The five sources of investment edge]] ★ *atomic candidate* - [[#6. The fund as a machine|6. 
The fund as a machine — investment lifecycle as a graph]] ★ *atomic candidate* - [[#7. Where humans are bottlenecks today|7. Where humans are bottlenecks today]] - [[#8. Where AI fundamentally changes the equation|8. Where AI fundamentally changes the equation]] **[[#Part III — Core Components]]** - [[#9. The Capital Layer|9. The Capital Layer]] - [[#10. The Data Layer|10. The Data Layer]] - [[#11. The Reasoning Layer|11. The Reasoning Layer]] - [[#12. The Execution Layer|12. The Execution Layer]] - [[#13. The Risk Layer|13. The Risk Layer]] - [[#14. The Memory & Reflection Layer|14. The Memory & Reflection Layer]] ★ *highest-leverage layer* - [[#15. The Governance & Compliance Layer|15. The Governance & Compliance Layer]] **[[#Part IV — The Core Agents]]** - [[#16. Origination agents|16. Origination agents]] - [[#17. Research agents|17. Research agents — the AI analyst team]] - [[#18. Macro & context agents|18. Macro & context agents]] - [[#19. Construction agents|19. Construction agents]] - [[#20. Monitoring agents|20. Monitoring agents]] - [[#21. Execution agents|21. Execution agents]] - [[#22. Reflection agents|22. Reflection agents]] - [[#23. Meta-agents|23. Meta-agents — the supervisors]] **[[#Part V — Where the Edge Comes From]]** - [[#24. Synthesis-at-scale edge|24. Synthesis-at-scale edge]] ★ *atomic candidate* - [[#25. Memory & coherence edge|25. Memory & coherence edge]] ★ *atomic candidate* - [[#26. Asymmetric patience edge|26. Asymmetric patience edge]] - [[#27. Counterfactual reasoning edge|27. Counterfactual reasoning edge]] - [[#28. Longitudinal behavior edge|28. Longitudinal behavior edge]] ★ *atomic candidate* - [[#29. Where the edge is NOT|29. Where the edge is NOT]] **[[#Part VI — Architecture & Implementation]]** - [[#30. Reference architecture|30. Reference architecture]] - [[#31. The orchestration problem|31. The orchestration problem]] - [[#32. Human-in-the-loop design|32. Human-in-the-loop design]] - [[#33. Failure modes and defenses|33. 
Failure modes and defenses]]
- [[#34. Infrastructure stack|34. Infrastructure stack]]

**[[#Part VII — Day in the Life]]**
- [[#35. New idea to position|35. New idea to position]]
- [[#36. Drawdown, unwind, LP cycle|36. Drawdown, unwind, LP cycle]]

**[[#Part VIII — Strategic Questions]]**
- [[#37. Build vs. assemble|37. Build vs. assemble]]
- [[#38. Single-model vs. multi-model|38. Single-model vs. multi-model]]
- [[#39. What is the GP actually doing?|39. What is the GP actually doing?]]
- [[#40. The defensibility question|40. The defensibility question]]

**[[#Atomic Note Candidates]]** · **[[#Related Notes in Vault]]** · **[[#Sources]]**

> [!abstract]- Index of diagrams
> - [[#Diagram 1 — The AI-native gradation]] → §2
> - [[#Diagram 2 — The fund as a machine]] → §6
> - [[#Diagram 3 — The agent workforce]] → Part IV
> - [[#Diagram 4 — Edge map (uniqueness × durability)]] → Part V
> - [[#Diagram 5 — Reference architecture]] → §30
> - [[#Diagram 6 — The defensibility stack]] → §40

---

# Part I — Foundations

> [!note]- Part I at a glance
> Strip the fund to its minimal definition, distinguish AI-native from AI-assisted, and name why the 2026 moment is different. Then anchor the archetype we're building around.

## 1. What a hedge fund actually is

A hedge fund is a machine that converts **capital + information → risk-adjusted return** for a fee. Everything else — Bloomberg terminals, prime brokerage, 2-and-20, Greenwich real estate — is implementation detail. The implementation detail matters (legal structure, LP alignment, gating, risk controls) but the minimal definition is what tells you what you're *actually* competing on: the conversion function.

Every fund is a bet about which inputs to source, how to process them, and how to turn processed information into positions. Two Sigma bets on scaled signal processing. Bridgewater on macro systematization. Pershing Square on concentrated activist conviction. Scion on forensic contrarianism.
The **AI-native** bet is that the conversion function itself has become dramatically cheaper and more scalable — and that a new configuration of fund becomes possible because of it. ## 2. What "AI-native" means Three gradations, in increasing order of nativeness: - **AI-assisted** — analysts use ChatGPT in a browser. Workflow is human-led. 95% of hedge fund employees do this today. Table stakes, not edge. - **AI-integrated** — firm has built internal tooling: RAG over research, transcript DBs, multi-step "blueprints." Bridgewater's AIA Labs and Man Group's Alpha Assistant are sophisticated versions. Agents fetch, synthesize, summarize. Humans decide. - **AI-native** — agents are first-class investment professionals. They own theses, make recommendations with confidence levels and invalidation criteria, argue in structured debate. Humans focus on the highest-leverage decisions. The architectural distinction: in AI-integrated funds, **humans call agents**. In AI-native funds, **agents call agents, and occasionally call humans**. > [!info] What AI-native does NOT mean > Not a "trading bot." Not RL on price data. Not alt-data pipelines feeding ML models. Those are 2015 ideas. AI-native in 2026 means *agentic reasoning systems with memory, tool use, specialization, and structured debate* across the full investment lifecycle. See [[AI agents]] and [[AI Agents Stack]]. A good sobering counterweight: [[The Misleading Allure of Anthropomorphizing AI]]. 
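The inversion is concrete enough to sketch in code. Below is a toy control loop (all class, function, and tool names are hypothetical, not drawn from any real framework) in which the human is registered as just one more tool the orchestrator may call, and only at an explicit gate:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Toy AI-native control loop: every worker, including the human,
    is a registered tool the orchestrator may invoke."""
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def run(self, idea: str) -> str:
        # Agents call agents: research output feeds the bear agent directly.
        research = self.tools["research_agent"](idea)
        bear = self.tools["bear_agent"](research)
        # The human is invoked BY the system, and only at the sizing gate.
        return self.tools["human_gate"](f"{research} | bear: {bear}")

orch = Orchestrator()
orch.register("research_agent", lambda s: f"thesis({s})")
orch.register("bear_agent", lambda s: f"bear({s})")
orch.register("human_gate", lambda s: f"APPROVED: {s}")
print(orch.run("ACME"))
```

The design point: swapping `human_gate` for an auto-approval stub changes the autonomy level without touching any agent, which is what makes the gradation a configuration choice rather than an architecture rewrite.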
### Diagram 1 — The AI-native gradation ```mermaid flowchart LR A["<b>AI-assisted</b><br/>Humans use tools<br/><i>95% of funds today</i>"] B["<b>AI-integrated</b><br/>Humans call agents<br/><i>Bridgewater AIA · Man Group</i>"] C["<b>AI-native</b><br/>Agents call agents<br/><i>Altbridge · this doc</i>"] A --> B --> C style A fill:#ffe4e1,stroke:#c94f4f,color:#000 style B fill:#fff4d6,stroke:#c99a2e,color:#000 style C fill:#d4f1d4,stroke:#2e8a3a,color:#000 ``` > The step-change is not at A→B (which is where most of the industry lives in 2026). It's at B→C, where the human is no longer the orchestrator. ## 3. Why now — the 2026 state of the world Four changes in the last eighteen months made this viable: 1. **Reasoning models became reliable enough.** Frontier models with extended thinking and tool use crossed the threshold where they can execute research workflows that previously required a junior analyst. Not perfect — but a reliable building block. See [[Large Language Model - LLMs]] and [[Architecture of LLM]]. 2. **Agentic frameworks stopped being toys.** LangGraph, Claude Agent SDK, OpenAI Agent SDK, CrewAI, TradingAgents — all matured through 2025. Multi-agent with debate and memory is now the default pattern. See [[AI Frameworks]] and [[Autonomous AI Agents - The rise, potential and challenges]]. 3. **Data became agent-readable.** Filings, transcripts, patents, alt-data, expert calls — increasingly standardized and queryable. Analyst headcount was the synthesis bottleneck. It isn't anymore. 4. **Compute got cheap enough for deep research.** A 20-step reasoning chain on a mid-cap costs ~$50. A fund doing this for 500 names/year spends $25K on inference — a rounding error. See also: [[The AI Timeline - Navigating the Road Ahead]], [[AI usage is now a baseline expectation]], [[Foundational Models MOC]]. ## 4. 
The archetype we're building around This document assumes a specific archetype: **concentrated, conviction-driven, fundamentally-researched, long-horizon, with a systematic overlay for screening and risk.** The mental models are Pershing Square ([[Bill Ackman]]) and Scion (Burry), with systematic elements borrowed from Renaissance and Two Sigma — for screening, portfolio construction, and risk decomposition, not for pricing alpha. Why this archetype, specifically: - **Concentration compounds with depth.** A 500-position fund cannot go deep on any name. A 10-15 position fund *must*. Agents reduce the marginal cost of depth from weeks of analyst time to hours of compute — which amplifies the concentration advantage rather than diluting it. - **Edge is research depth, not signal speed.** Ackman/Burry don't win on latency. They win on forensic depth and narrative reframing. AI is terrible at reaction-speed edge and extraordinary at depth-synthesis edge. - **Long horizons are forgiving of model imperfection.** A research agent that misreads a footnote gets corrected by a second agent or a human before a multi-year position is sized. Horizon absorbs AI fallibility. - **Decision cadence fits human oversight.** 5-15 decisions/year can all have deep human review without becoming a bottleneck. - **Systematic overlay fills qualitative blind spots.** Factor crowding, regime shifts, correlation — a hybrid is better than pure fundamental or pure systematic. [↑ Back to Map](#map-of-contents) --- # Part II — First Principles > [!note]- Part II at a glance > The five sources of edge, the fund as a graph, where humans are the bottleneck today, and what AI structurally changes. ## 5. The five sources of investment edge > [!idea] Atomic note candidate > This decomposition is reusable across investing, startup evaluation, and competitive strategy. Worth extracting as its own atomic note. Five sources. Exhaustive, mutually exclusive: 1. **Informational** — data others don't have. 
Mostly dead in public markets (small, fleeting, legally fraught). 2. **Analytical** — same data, better synthesis. Where fundamental investing has always lived. Where AI changes the calculus most. 3. **Behavioral** — others are forced into predictable mispricing (forced selling, benchmark-hugging, herding). If you can tolerate what they can't, you get paid. 4. **Structural** — your capital has properties others' doesn't (permanent, unconstrained, unbenchmarked). About fund design, not being smarter. 5. **Temporal** — you can wait longer than the market. The most underrated and hardest to harvest — requires LPs who tolerate patience. An AI-native fund in the Ackman/Burry mold harvests primarily **analytical, behavioral, and temporal** edge. AI directly amplifies analytical, indirectly amplifies the other two via memory coherence. ## 6. The fund as a machine > [!idea] Atomic note candidate > The 11-stage decomposition is a reusable mental model for any capital allocator. Worth its own atomic note. Every dollar a fund makes or loses flows through a directed graph of stages. ### Diagram 2 — The fund as a machine ```mermaid flowchart LR A["a. Universe<br/>definition"] --> B["b. Origination"] B --> C["c. Preliminary<br/>research"] C --> D["d. Deep research"] D --> E["e. Bear case<br/>construction"] E --> F["f. Position<br/>construction"] F -. "🧑 HUMAN GATE" .-> G["g. Execution"] G --> H["h. Monitoring"] H -. "trigger" .-> I["i. Exit"] I -. "🧑 HUMAN GATE" .-> J["j. Post-mortem"] J -. "feeds memory" .-> B J --> K["k. Capital & ops"] classDef agent fill:#d4e8ff,stroke:#2e6fc9,color:#000 classDef human fill:#fff0d6,stroke:#c98a2e,color:#000 classDef memory fill:#e8d4ff,stroke:#6f2ec9,color:#000 class A,B,C,D,E,H agent class F,G,I human class J,K memory ``` > Blue = agent-native. Orange = human-gated (5-15/year — concentrated cadence fits human oversight). Purple = memory/ops — the reflection loop that feeds tomorrow's research. 
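The stage graph in Diagram 2 is small enough to encode directly. A minimal, illustrative sketch (stage names and gate placement follow the diagram; the representation itself is an assumption, not a prescribed schema):

```python
# Toy encoding of the 11-stage lifecycle from Diagram 2.
# Edges marked True require a human sign-off before traversal.
STAGES = ["universe", "origination", "prelim_research", "deep_research",
          "bear_case", "construction", "execution", "monitoring",
          "exit", "post_mortem", "capital_ops"]

EDGES = {  # stage -> list of (next_stage, human_gated)
    "universe":        [("origination", False)],
    "origination":     [("prelim_research", False)],
    "prelim_research": [("deep_research", False)],
    "deep_research":   [("bear_case", False)],
    "bear_case":       [("construction", False)],
    "construction":    [("execution", True)],    # human gate
    "execution":       [("monitoring", False)],
    "monitoring":      [("exit", False)],
    "exit":            [("post_mortem", True)],  # human gate
    "post_mortem":     [("origination", False), ("capital_ops", False)],
}

def gates_on_path(path):
    """Count human gates crossed along a path of stage names."""
    gated = {(a, b) for a, nexts in EDGES.items() for b, g in nexts if g}
    return sum((a, b) in gated for a, b in zip(path, path[1:]))

print(gates_on_path(STAGES[:10]))
```

Walking the full graph from universe to post-mortem crosses exactly two human gates, which is the structural claim of the diagram: agents run the pipeline, humans sign off at sizing and at exit review.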
A traditional fund staffs each stage with humans. An AI-native fund reconceives the graph as a system where agents perform most stages and humans perform *specific gates*. The design question is not "how do we automate each stage" but "which stages benefit from agent-ification, which require human judgment, and how do the handoffs work?" ## 7. Where humans are bottlenecks today - Analyst deep-dive throughput: 15-20/year. Agent equivalent: 150-200. - Qualitative context capacity: 10-30 names before depth degrades. Agent capacity: all of them, with perfect recall. - Written theses degrade. Agent-stored theses don't. - Bear cases are under-argued because analysts get attached (commitment bias). Agents have no ego. - Post-mortems are rare because nobody has time or likes writing them. - Longitudinal management tracking is almost never done systematically. - Hidden correlations across positions get noticed only when they hurt. Each is an agent-shaped opportunity. ## 8. Where AI fundamentally changes the equation Three categories of change: - **Throughput.** Reading every 10-K, proxy, 8-K footnote, transcript, expert call, patent filing, regulatory comment letter — for every company in a universe of thousands — now scales to one human overseeing a set of agents. Synthesis bandwidth jumps two orders of magnitude. - **Cognitive.** Agents hold perfect memory, don't tire, don't anchor, don't fear drawdowns, don't protect egos. They argue a bear case as hard as a bull case if structured to do so. See [[How AI Is Rethinking the Way We Reason]]. - **Structural.** Cost base collapses. A traditional fund running 10 analysts spends $5-15M/yr. An AI-native fund with equivalent research capacity spends $200K-$2M on compute and a fraction of the headcount. This changes what fund sizes, LP bases, and fee structures are viable. Related: [[AI Eats Services Not Software]]. The naive conclusion: AI-native funds will crush traditional funds. 
The careful conclusion: the first wave will have real edge; the second wave will have edge that decays; the third will need to find new edges (see [[#Part V — Where the Edge Comes From]]).

[↑ Back to Map](#map-of-contents)

---

# Part III — Core Components

> [!note]- Part III at a glance
> Seven first-principles layers. The Capital and Execution layers are unchanged; the Data, Reasoning, Risk, Memory, and Governance layers are where the fund is actually designed.

## 9. The Capital Layer

Unchanged in form, transformed in reporting. LP/GP structure, fund terms, gating — all unchanged. You still need admin, audit, prime broker, counsel. None of this is novel. See [[VC Fund Performance Metrics]] for metric definitions.

What changes is **what LPs get**: personalized, interactive reporting, live thesis status per position, queryable decision journals. The risk is transparency theater — LPs who feel informed but don't engage. Design accordingly.

## 10. The Data Layer

Four sub-layers:

- **Structured financials.** EDGAR, global exchange feeds, fundamentals (Compustat, FactSet, Xignite). Commoditized, but needs reliable access.
- **Unstructured primary.** Every 10-K, 10-Q, 8-K, proxy, transcript, comment letter, globally. Patents, clinical trials, FCC/FTC/EU filings, court filings. The raw words. This is where analytical edge lives.
- **Alternative.** Satellite, shipping, credit card, web traffic, job postings, app downloads, scraped pricing. Increasingly commoditized.
- **Expert/human.** Network transcripts (Guidepoint, Third Bridge, AlphaSights), sell-side, management access logs, conferences.

Architectural choice: the industry is converging on a knowledge graph + RAG hybrid as the pattern. Normalized structured store + entity-event graph + high-quality RAG over raw text. See [[knowledge graphs]] and [[Knowledge Graphs for Industrial Data]]. Related: [[data lakes]], [[modern data stack]].
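A minimal sketch of what the knowledge graph + RAG hybrid means in practice: a structured entity-event graph answers "what happened to X, when", while retrieval over raw text supplies the supporting passages. All data is invented, and naive keyword matching stands in for a real vector search:

```python
from collections import defaultdict

events = defaultdict(list)   # entity -> [(date, event_type, doc_id)]
documents = {}               # doc_id -> raw text

def ingest(entity, date, event_type, doc_id, text):
    """Populate both halves of the hybrid: graph edge + raw text store."""
    events[entity].append((date, event_type, doc_id))
    documents[doc_id] = text

def query(entity, keyword):
    """Graph hop first (all events for the entity, in date order),
    then lexical retrieval in place of real semantic retrieval."""
    hits = []
    for date, etype, doc_id in sorted(events[entity]):
        if keyword.lower() in documents[doc_id].lower():
            hits.append((date, etype, doc_id))
    return hits

ingest("ACME", "2025-11-03", "8-K", "d1", "ACME announces CFO resignation.")
ingest("ACME", "2026-01-20", "10-K", "d2", "Inventory grew faster than revenue.")
print(query("ACME", "inventory"))
```

The point of the split: the graph gives deterministic, auditable structure (who, what, when), while the text layer preserves the raw words that the research agents actually reason over.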
> [!warning] Don't over-index on exotic alt-data > A concentrated fundamental fund does not live or die on slightly better credit card data. It lives or dies on depth of analysis per thesis. Don't confuse data spend with edge. ## 11. The Reasoning Layer Where the interesting architecture lives. Three pieces: - **Agent definitions.** Specialized workers (see [[#Part IV — The Core Agents]]). Each has system prompt, tools, memory scope, methodology. See [[AI Agents Stack]] and [[AI Frameworks]]. - **Memory architecture.** Per-thesis, per-name, per-theme, firm-level. Durable, indexed, versioned so you can see how thinking evolved. Related: [[Context Window]], [[Context Layers MOC]], [[in-context learning]]. - **Orchestration.** The conductor. Early funds: workflow engine (LangGraph, Temporal). Later: more dynamic, agents pulling agents in real-time. The reasoning layer is the fund's bespoke IP — the replacement for "what makes our analysts special" in a traditional fund. ## 12. The Execution Layer For a concentrated long/short fund, execution minimizes information leakage and market impact on meaningful positions. Algo execution via prime broker is table stakes; dark pools, RFQs, options for stealth accumulation are situational. AI's contribution here is modest: liquidity-aware position sizing, adversarial footprint analysis, execution counterfactuals. Do not over-engineer. Execution will not make or break this fund. ## 13. The Risk Layer Risk is multi-scale: - **Position-level thesis risk.** Continuous disconfirmation, catalyst tracking, surprise response. Agents shine here. - **Portfolio-level risk.** Factor exposures, correlation, concentration, liquidity. Standard tooling (Barra, Axioma, or PCA over factor universe). - **Tail risk.** Scenario testing against 2000, 2008, 2020, 2022 + novel scenarios from a macro agent. - **Counterparty & operational.** Prime broker, cash sweep, custodian, key-person, model risk. The unsexy stuff that actually kills funds. 
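For the portfolio-level piece, the "PCA over factor universe" idea can be sketched on synthetic returns. This is illustrative only (a production book would use a commercial risk model); the return-generating process and all parameters below are assumptions:

```python
import numpy as np

# Synthetic daily returns: one common market factor plus idiosyncratic noise.
rng = np.random.default_rng(0)
n_days, n_assets = 500, 8
market = rng.normal(0, 0.01, n_days)          # common factor returns
betas = rng.uniform(0.5, 1.5, n_assets)       # per-asset factor loadings
returns = market[:, None] * betas + rng.normal(0, 0.005, (n_days, n_assets))

# PCA via eigendecomposition of the sample covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
share_pc1 = eigvals[-1] / eigvals.sum()       # variance explained by PC1

# Risk of an equal-weight book under that covariance.
w = np.full(n_assets, 1 / n_assets)
port_var = w @ cov @ w
print(f"PC1 explains {share_pc1:.0%} of universe variance")
print(f"equal-weight book vol ~ {np.sqrt(port_var * 252):.1%} annualized")
```

When one principal component explains most of the variance, the "diversified" 8-name book is really one factor bet in disguise, which is exactly the hidden-correlation failure the risk layer exists to catch.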
> [!important] The PM must not be the only check on risk > That's been fatal for concentrated funds (LTCM, Archegos). Concentration amplifies errors. The risk agent is a structural ally, not a bureaucratic drag. ## 14. The Memory & Reflection Layer The most underrated component and the one most likely to produce durable edge. Four sub-systems: - **Decision journal.** Every decision captured at the moment of decision, not reconstructed. Trivial with agents; impossible manually. Related: [[090 Personal Knowledge Management - PKM]], [[Information Evolution, Personal Knowledge & Collective Intelligence]]. - **Post-mortem agent.** Triggered on every exit. Structured retrospective: original thesis, actual outcome, correct for right reasons vs. wrong reasons, skill vs. luck. Inspired by [[2025 Year-End Reflection - What Landed, What Didn't]]. - **Pattern extraction.** Runs across the post-mortem corpus looking for systematic patterns (sizing bias, sector blind spots, exit timing). - **Anti-drift.** The original thesis is stored verbatim. Every re-check forces comparison to current rationale. Material divergences are flagged. This layer is **how the fund gets better over time**. Without it, you're doomed to repeat mistakes and rediscover successes. This compounds for the life of the fund — genuine structural advantage. > [!idea] Atomic note candidates from this section > **Thesis Drift** (as a named failure mode) and **Decision Journal as First-Class Output** are both worth standalone atomic notes. ## 15. The Governance & Compliance Layer Unglamorous, essential. - **Audit trail.** Every agent action, prompt, output, human override — logged and searchable. - **Model versioning.** Every prompt + weight combination versioned. Backtests replayable deterministically years later. - **Compliance monitors.** 13F, 13D, short selling, insider windows, restricted lists, mandate limits. Non-negotiable gates. 
- **Human overrides logged.** Over time, this is a dataset about where human judgment adds value. See [[LLM-as-Judge]] and [[AI Verification]] for the verification pattern. - **Policy governance.** Who can change prompts? Change risk limits? Approve a data source? In a traditional fund "policy" is dusty docs. Here it's code. Treat it as such. [↑ Back to Map](#map-of-contents) --- # Part IV — The Core Agents > [!note]- Part IV at a glance > The workforce. ~20 agent types across origination, research, macro, construction, monitoring, execution, reflection, and supervision. Each has role, tools, memory, handoff protocols. Foundational reference: [[AI agents]], [[AI Agents Stack]], [[Autonomous AI Agents - The rise, potential and challenges]]. ### Diagram 3 — The agent workforce ```mermaid flowchart TB O(["🎯 Orchestrator<br/>routes · handoffs · review gates"]) subgraph Orig["🔍 ORIGINATION"] direction TB O1["Multi-factor screener"] O2["Event surveillance"] O3["Insider activity"] O4["Positioning"] O5["Contrarian/distressed"] O6["Narrative"] end subgraph Rsch["📊 RESEARCH"] direction TB R1["Fundamentals"] R2["Accounting forensics"] R3["Management quality"] R4["Competitive dynamics"] R5["Customer/supplier"] R6["Regulatory/legal"] R7["Valuation"] R8["🐂 Bull case"] R9["🐻 Bear case"] end subgraph Macro["🌐 MACRO"] direction TB MA1["Rate regime"] MA2["Sector rotation"] MA3["FX / commodity"] MA4["Geopolitical"] end subgraph Const["🏗️ CONSTRUCTION"] direction TB C1["Position sizing"] C2["Hedge construction"] C3["Instrument selection"] C4["Entry plan"] end subgraph Mon["👁️ MONITORING"] direction TB M1["Thesis monitor"] M2["Catalyst tracker"] M3["Surprise response"] M4["Drawdown protocol"] end subgraph Refl["🧠 REFLECTION"] direction TB RF1["Decision journal"] RF2["Post-mortem"] RF3["Pattern extraction"] RF4["Calibration"] end subgraph Meta["🛡️ META — SUPERVISORS"] direction TB MT1["Consistency checker"] MT2["Source verifier"] MT3["Budget governor"] end O --> Orig O --> Rsch O --> 
Macro O --> Const O --> Mon O --> Refl Meta -. "oversees" .-> Rsch Meta -. "oversees" .-> Const Meta -. "oversees" .-> Mon R8 <-. "structured debate" .-> R9 ``` > The orchestrator is the only top-level controller. Specialists work in parallel per idea; Bull and Bear agents run adversarial debate on every thesis; Meta-agents never act — they *audit* (source check, consistency flag, compute budget). ## 16. Origination agents Job: surface names and situations that deserve attention. - **Multi-factor screener.** Value/quality/momentum/revision screens with anomaly flagging. - **Event-driven surveillance.** 8-Ks, proxy fights, M&A, litigation, FDA, management changes, spin-offs. - **Insider activity.** Form 4 parsing, unusual cluster buys, buying into weakness, CFO selling before misses. - **Positioning.** Short interest, borrow cost, options positioning, 13F smart-money shifts. - **Contrarian/distressed screener.** Burry-shaped — names where something terrible happened recently and the question is whether it's *actually* terrible. - **Narrative agent.** Reads the cycle, maps narratives to companies, tracks sentiment evolution. Output: ranked queue of opportunities, each with type, initial hypothesis, advance/pass recommendation. ## 17. Research agents The analyst team. Works in parallel on each new idea. - **Fundamentals analyst.** Reads filings, builds unit economics, frames the question: what does this earn in a normal environment? - **Accounting forensics.** Burry-style. Beneish M, Altman Z, Dechow accruals, earnings management indicators (channel stuffing, capitalized costs, inventory vs. revenue, DSO, cash vs. accounting earnings, off-balance-sheet). Footnote obsessed. - **Management quality.** Longitudinal — every transcript the CEO/CFO has ever given, at this company and prior ones. Tracks promises vs. delivery. Reads comp design, related parties, insider behavior patterns. - **Competitive dynamics.** Porter's Five Forces, live. 
Reads competitor filings, tracks share, pricing, entrants, substitution.
- **Customer/supplier.** Expert transcripts, reviews, app ratings, surveys, social. Outside-in view.
- **Regulatory/legal.** Court filings, comment letters, lobbying disclosures, pending legislation.
- **Valuation.** DCF, comps, SOTP, reverse DCF, asset floor. Base/bull/bear with explicit assumptions.
- **Bull and Bear case agents.** Adversarial. Structured debate. Outputs explicit variant perception: *what do we believe that the market does not, and why are we right?*

> [!idea] Atomic note candidate
> **Bull/Bear Structured Adversarial Debate** — the TradingAgents empirical finding that structured debate beats single-agent analysis is a transferable concept worth extracting.

## 18. Macro & context agents

Concentrated funds are not macro funds, but not macro-blind either.

- **Rate regime.** Policy trajectory, curve shape, real rates, inflation expectations.
- **Sector rotation.** Leadership, relative strength, earnings revision trends.
- **FX & commodity.** For positions with material foreign revenue or commodity input exposure.
- **Geopolitical.** Trade policy, sanctions, elections, regulatory shifts. Maps to position-level risk.

Context providers, not decision-makers.

## 19. Construction agents

- **Position sizing.** Kelly-adjusted (with a significant haircut), correlation-aware, liquidity-constrained.
- **Hedge construction.** Pairs, sector shorts, collars, vol hedges. Explicit rationale.
- **Instrument selection.** Common, preferred, convertible, warrants, options — driven by thesis shape.
- **Entry plan.** Pacing, channels, max price, guardrails.

## 20. Monitoring agents

The work after the position is on. Biggest gap in most funds.

- **Thesis monitoring.** Continuous re-assessment per position. Green/yellow/red status.
- **Catalyst tracking.** Timeline + evolving probability.
- **Surprise response.** Material event → first-pass assessment within minutes.
- **Drawdown protocol.** Forces deliberate re-evaluation during drawdowns rather than reflexive response. ## 21. Execution agents Order routing, impact estimation, TCA. Mechanical. Important but not differentiating. ## 22. Reflection agents (See also [[#14. The Memory & Reflection Layer]].) - **Decision journal agent.** Captures at moment of decision. - **Post-mortem.** Structured retrospective on every exit. - **Pattern extraction.** Runs across the corpus. - **Calibration.** Stated probabilities vs. actual outcomes. Are we well-calibrated at 80%? 50%? ## 23. Meta-agents Agents supervising agents. - **Orchestrator.** Top-level conductor. Routes work, manages handoffs, decides when a thesis is ready for human review. - **Consistency checker.** Flags material disagreement between agents. Doesn't resolve it — surfaces for explicit resolution. - **Hallucination & source verifier.** Every factual claim must be traceable to source. Non-negotiable trust mechanism. See [[LLM-as-Judge]], [[RAG-based verification]], [[AI Verification]]. - **Budget governor.** Inference isn't free. Prioritizes compute by expected value (probability × size × investigation cost). [↑ Back to Map](#map-of-contents) --- # Part V — Where the Edge Comes From > [!note]- Part V at a glance > The architecture is table stakes. The edge is what makes the fund worth owning. Five real edges. What it's NOT. 
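One note before mapping the edges: the calibration agent described in §22 above reduces to a small computation, which is part of why it is so cheap to run continuously. A minimal sketch, bucketing stated probabilities against realized outcomes over invented decision-journal entries:

```python
from collections import defaultdict

# Hypothetical journal entries: (stated probability, did the thesis play out?)
calls = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),
    (0.5, False), (0.5, True), (0.5, False), (0.5, False),
]

# Group outcomes by stated confidence level.
buckets = defaultdict(list)
for p, outcome in calls:
    buckets[p].append(outcome)

for p in sorted(buckets):
    realized = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.0%} -> realized {realized:.0%} over {len(buckets[p])} calls")

# Brier score: mean squared gap between stated p and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in calls) / len(calls)
print(f"Brier score: {brier:.3f}")
```

With real journals the buckets would hold years of calls per agent and per human, so the same loop answers "are we well-calibrated at 80%?" separately for each decision-maker.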
### Diagram 4 — Edge map (uniqueness × durability) ```mermaid quadrantChart title Edge by uniqueness to AI-native × durability x-axis "Commodity" --> "Unique to AI-native fund" y-axis "Short-lived / arb'd away" --> "Compounds over time" quadrant-1 "Durable and differentiated (moat)" quadrant-2 "Durable but shared" quadrant-3 "Arb'd away" quadrant-4 "Head start, then erodes" "Memory and coherence": [0.85, 0.90] "Longitudinal behavior": [0.82, 0.78] "Asymmetric patience": [0.45, 0.75] "Counterfactual reasoning": [0.65, 0.55] "Synthesis-at-scale": [0.55, 0.35] "Sentiment signals": [0.28, 0.12] "Data volume alone": [0.22, 0.15] "Speed / latency": [0.12, 0.10] "Factor alpha": [0.10, 0.08] ``` > Upper-right quadrant is where a real fund wants to live. Lower-left is where dead funds live. Synthesis-at-scale is an immediate edge that migrates *down* as competitors copy. Memory-coherence and longitudinal behavior migrate *up* as the fund compounds. ## 24. Synthesis-at-scale edge > [!idea] Atomic note candidate > **Synthesis-at-Scale** as a distinct edge concept is worth an atomic note — applies beyond investing (legal discovery, medical research, intelligence analysis). Clearest and most immediate. A human analyst reads ~3 10-Ks/week carefully. A research agent reads 3,000. When synthesis of qualitative data is the bottleneck — and in fundamental investing it is — removing it produces *actual* information others don't have. Specific forms: every earnings transcript a management team has ever given across companies and quarters; every 10-K footnote for the universe; every litigation/regulatory/patent/lobbying trail for covered names; every expert network transcript across the industry, not just the 2-3 an analyst has time for. Trajectory: as more funds deploy synthesis agents, this erodes. Within 3-5 years it'll be table stakes. Edge migrates to *what* you read (proprietary corpus) and *how* you synthesize (proprietary reasoning). 
Related: [[Observed Exposure — Anthropic's AI Penetration Metric]]. ## 25. Memory & coherence edge > [!idea] Atomic note candidate > **Memory & Coherence as Edge** is a subtle, durable, and uniquely-AI concept worth extracting. The most underrated source of edge. Humans have imperfect memory. Analysts forget exact rationale, exact sizing logic, exact invalidation criteria. They rationalize positions they should cut because emotional memory replaces analytical memory. Agents maintain perfect memory. Every thesis, assumption, invalidation criterion stored and surfaced on demand. This removes a huge category of human error: **thesis drift**. Second-order: **coherence over time**. Human PMs' views drift with news cycles, market mood, recent outcomes. A fund that holds coherent theses across 3-5 years — without drift — operates with a different conviction profile than its peers. A structural advantage for long-horizon investors. And it compounds: the longer the fund runs, the bigger the memory advantage. Links to [[Defensibility Principles MOC]] ("Switching Costs" and "Sequential Defensibility" translate directly — the longer the fund's memory, the less substitutable it becomes). ## 26. Asymmetric patience edge Related to memory. The ability to hold a thesis through poor performance *when the thesis is right*. Human PMs get fired, have career risk, rationalize, shorten horizons during drawdowns. An AI-native fund with disciplined structural design (LP alignment, lockups, stable governance) can hold longer *for the right reasons*. Agents don't panic; they update when evidence updates, not when mood updates. Caveats: requires LPs who tolerate volatility (an LP-selection decision). Patience without a correct thesis is slower ruin. The edge is *coupling* patience with rigorous disconfirmation. ## 27. Counterfactual reasoning edge Agents can run counterfactuals at human-impossible scale. 
For every position: 1,000 variations across assumptions (growth, margin, multiple, competitive response, regulatory outcome). Identify which assumptions the position is most sensitive to. Explicitly quantify: if X moves by Y, return moves by Z. Different from traditional sensitivity analysis — agents reason about non-linear interactions and qualitative scenario mixtures, not just spreadsheet cells. *"What if management is replaced and the new CEO is a cost-cutter? What if consumer behavior shifts by 20%? What if the primary competitor is acquired by someone irrational?"* Result: much better-calibrated conviction. You know *why* you're right, under what conditions, and under what conditions you'd be wrong. ## 28. Longitudinal behavior edge > [!idea] Atomic note candidate > **Longitudinal Behavior Tracking** is a specific, uniquely-AI edge that almost nobody is systematically harvesting in public markets. A specific form of analytical edge uniquely enabled by AI. When a new CEO takes over a target company, a longitudinal agent immediately reads every transcript, letter, and statement that CEO has made at their prior employers. Parses promises and delivery. Identifies strategic priors. Produces a calibrated forecast of how they'll run this company. No human analyst has time to do this for every name on a watchlist. An agent does. Same applies to boards (governance patterns), auditors (prior restatements), CFO-hiring patterns (internal promote vs. turnaround specialist). Almost entirely unexploited in public markets today. Scales trivially with agents. ## 29. Where the edge is NOT - **Speed.** HFT is a game for Citadel, Jane Street, Renaissance. Don't play. - **Data volume alone.** Bottleneck is synthesis, not volume. Pay for data only with a specific usage thesis. - **Standard factor alpha.** Value/momentum/quality/low-vol — arbitraged. Competes on implementation cost, not discovery. - **Simple LLM sentiment signals.** Anyone with an API key can do this. 
If your output is a sentiment score, you haven't built a fund, you've built a feature. - **Model sophistication alone.** Everyone copies. Edge is in the *proprietary corpus*, *proprietary memory*, and *investment discipline encoded into the system*. Model choices commoditize; what you feed and the discipline you wrap don't. [↑ Back to Map](#map-of-contents) --- # Part VI — Architecture & Implementation > [!note]- Part VI at a glance > Reference architecture, orchestration, where humans sit, failure modes, and a working stack. ## 30. Reference architecture ### Diagram 5 — Reference architecture ```mermaid flowchart TB subgraph Ingest["📥 INGESTION"] I1["Filings · Transcripts · Pricing"] I2["Alt-data · Expert calls · News"] end subgraph Norm["🧬 NORMALIZATION"] N1["Knowledge graph · Entity resolution · Event extraction"] end subgraph Mem["💾 MEMORY"] direction LR M1["Per-thesis"] M2["Per-name"] M3["Per-theme"] M4["Firm-level"] M5["Decisions"] end subgraph Reason["🤖 REASONING — agent swarm + orchestrator"] direction LR Rz1["Origination"] Rz2["Research"] Rz3["Macro"] Rz4["Construction"] Rz5["Monitoring"] end H["🧑 HUMAN GATE<br/>IC review · veto · allocation"] E["💱 EXECUTION<br/>broker · TCA"] subgraph Reflect["🔁 REFLECTION"] direction LR RF1["Decision journal"] RF2["Post-mortems"] RF3["Calibration"] RF4["Pattern mining"] end subgraph Gov["🛡️ GOVERNANCE"] direction LR G1["Audit log"] G2["Model versioning"] G3["Compliance"] G4["LP reporting"] end Ingest --> Norm --> Mem --> Reason Reason --> H --> E E --> Reflect Reflect -. "feeds" .-> Mem Reflect --> Gov classDef data fill:#d6eaff,stroke:#2e6fc9,color:#000 classDef agent fill:#d4f1d4,stroke:#2e8a3a,color:#000 classDef human fill:#fff0d6,stroke:#c98a2e,color:#000 classDef gov fill:#f5d6ff,stroke:#8a2ec9,color:#000 class Ingest,Norm,Mem data class Reason,Reflect agent class H,E human class Gov gov ``` > The reflection → memory loop is what makes the fund compound. Without it, you have an AI-integrated fund. 
With it, you have an AI-native one. Everything is instrumented. Every agent action → log entry → traceable to source data. Auditability is not optional. ## 31. The orchestration problem The hardest engineering problem. Where early AI-native funds trip. **Naive model:** human asks, orchestrator calls agents in sequence, produces answer. Works for AI-integrated. Doesn't produce native capability. **Native model:** agents run continuously. Origination always surfacing. Monitoring always re-checking. Narrative always watching. Work triggered by events (filing, price move, news) and cadences (daily monitoring, weekly re-check, quarterly post-mortem). Agents call agents dynamically when they need input. The human reviews specific outputs, sets policy, makes allocation calls, vetoes. Requires: event bus (Kafka/NATS), cadence scheduler, work queue with prioritization, typed agent-to-agent protocols. Engineering: Temporal or equivalent + vector store + graph DB + framework (LangGraph, Claude Agent SDK). Real infrastructure. See [[AI Frameworks]]. ## 32. Human-in-the-loop design Where humans sit: - **Policy.** Universes, risk limits, mandate scope. Humans decide, agents follow. - **Material capital commitments.** Every new position above a threshold, every material sizing change, every exit. Real gate, not rubber stamp. Goal: humans see the 5-15 decisions that actually matter and review them deeply. - **Veto.** Any agent recommendation. Logged with rationale. Over time, veto patterns are mined for signal on where human judgment adds value. - **Out of the loop** for: routine monitoring, screening, bulk research, data ingestion, compliance, report drafting (not approval). Concentrate human cognitive load on the highest-leverage decisions. If the GP is formatting an LP letter, the system is broken. ## 33. 
Failure modes and defenses

| Failure | Defense |
|---|---|
| Hallucination in research | Source verifier checks every factual claim against source docs |
| Thesis drift | Memory stores *original* thesis verbatim; forces comparison at every re-check |
| Over-concentration from correlated theses | Correlation-aware sizing + factor decomposition agent |
| Agent collusion (bull/bear anchor on same framing) | Adversarial prompting, periodic re-seeding, human review of debate transcripts |
| Model-provider risk | Multi-model architecture with provider abstraction |
| Model capability drift | Deterministic replay of historical decisions on new model versions |
| Regulatory surprise | Stay close to counsel; audit-readiness from day one |
| Data-source fragility | Redundancy, multi-vendor contracts, data-sovereignty strategy |

## 34. Infrastructure stack

Representative, as of 2026:

- **Orchestration:** LangGraph or Temporal + custom router.
- **Agent framework:** Claude Agent SDK, OpenAI Agents SDK, or a purpose-built multi-model runtime.
- **LLM providers:** multi-provider. Anthropic for long-context reasoning, OpenAI for structured tasks, open-weight self-hosted models for latency- or privacy-sensitive work.
- **Memory:** vector DB (Pinecone, Weaviate, pgvector) + graph DB (Neo4j or an RDF triple store) + object store (S3) for raw docs.
- **Data pipeline:** ingestion + normalization + knowledge graph (Airflow/Dagster + dbt).
- **Event bus:** Kafka or NATS.
- **Execution:** prime broker API + algo routing.
- **Risk & PM:** bespoke, factor decomposition on MSCI Barra / Axioma, portfolio optimizer.
- **Observability:** structured event store, metrics, replay.
- **Governance:** RBAC, append-only audit logs, model versioning (MLflow).

None of this is exotic. The hard part is building it with **auditability and explainability as first-class requirements**, not retrofits. References: [[AI Frameworks]], [[AI Agents Stack]], [[AI Inference Infrastructure]], [[Recommended AI-native Tool-list Summary]].
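The thesis-drift defense in the failure-mode table above reduces to a data-structure decision: store the original thesis immutably and make every re-check evaluate the *original* invalidation criteria, never a paraphrase. A minimal sketch — the ticker, thesis text, and criteria below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThesisRecord:
    """Immutable thesis memory: the original text is frozen at entry and never
    edited, so every re-check compares against what was actually believed."""
    name: str
    original_thesis: str           # verbatim, frozen at position entry
    invalidation_criteria: tuple   # (label, predicate) pairs

def recheck(record, observations: dict):
    """Evaluate every invalidation criterion against current observations.
    Returns the labels of tripped criteria — an empty list means 'still green'."""
    return [label for label, predicate in record.invalidation_criteria
            if predicate(observations)]

thesis = ThesisRecord(
    name="ACME long",
    original_thesis="Market underestimates margin uplift from the product transition.",
    invalidation_criteria=(
        ("gross margin stops expanding", lambda o: o["gm_delta_bps"] <= 0),
        ("transition revenue mix stalls", lambda o: o["new_mix"] < 0.30),
    ),
)
tripped = recheck(thesis, {"gm_delta_bps": 120, "new_mix": 0.42})
# tripped == []  → invalidation criteria still green
```

`frozen=True` is the whole point: the system can append new observations and new re-check results, but it cannot quietly rewrite what the thesis was — which is exactly the move a drifting human analyst makes.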
[↑ Back to Map](#map-of-contents) --- # Part VII — Day in the Life > [!note]- Part VII at a glance > Two concrete scenarios that make the architecture tangible. ## 35. New idea to position **Tuesday 9:14 am.** A 10-K/A (amendment) drops for a $2B mid-cap. Ingestion picks it up. Event agent classifies — amendments are higher-priority than routine. Narrative agent flags a recent short seller report on this name. **9:16 am.** Orchestrator queues preliminary research. Fundamentals agent reads the amendment vs. original, identifies the specific restated items (inventory, revenue recognition). Forensics agent pulled in — runs Beneish pre- and post-, flags this is a Dechow-style "big bath" reset. **9:42 am.** Preliminary done. Output: accounting risk is live, short thesis had partial merit but was overstated, stock is down 22%, adjusted valuation is interesting. Advance to deep dive. **Wednesday, all day.** Deep dive. Management quality agent reads every transcript this CEO has given at this company + 8 prior companies (41 transcripts). Flags the CEO has been at two prior companies with accounting incidents. Competitive agent notes two competitors moving in with better unit economics. Valuation agent builds 3 DCFs. Bull vs. bear debate. Bear wins — earnings power is lower than bulls argue, accounting is suspect, competitive dynamics deteriorating. **Thursday 10 am.** PM reviews. Reads the debate transcript. Interrogates specific claims about the CEO's history (source verifier confirms; PM wants to read two transcripts directly). PM disagrees on one point — knows a customer who thinks the moat is durable. Second-pass requested. **Thursday PM.** Second-pass finds some supportive data but not enough to flip the conclusion. Pass on the long, too much idiosyncratic risk to short. Done. **Total human time: ~90 minutes over a week. Compute cost: ~$180. Alternative: 2-3 weeks of junior analyst time.** ## 36. 
Drawdown, unwind, LP cycle **Drawdown.** Position is down 18% in three days on no news. Surprise response agent at open: no material news, no earnings imminent, no insider selling, no factor move, possible forced selling from a value fund with recent 13F-implied redemptions. Monitoring agent re-runs the thesis check — all invalidation criteria still green. Drawdown protocol: "would we buy at this price?" — yes, position is now 25% below fair value. Recommend add. **PM review: 12 minutes.** Sizes add at 30% of remaining capacity. Done. **Thesis unwind.** Held 31 months. Original thesis — a product transition the market underestimated — has played out. Up 140%. Thesis monitor flags substantial playout. Valuation shows current price requires growth above our realistic estimate. Bull tries continuation thesis; bear pushes back. Construction recommends exit. Post-mortem documents: original thesis, actual path, where right, where right-for-wrong-reasons. Calibration updated — we may be systematically underconfident when research-driven conviction is high. PM reviews, exits over 10 days. Two-paragraph exit note for the LP letter (agent drafts, PM edits). **LP cycle.** Quarter-end. Reporting agent has been accumulating: attribution, new positions, exits, composition changes, risk, outlook. Drafts 12-page letter + interactive dashboard where any LP can interrogate live thesis status (with confidentiality and cross-LP isolation enforced). PM edits the narrative — agent's draft is accurate but PM adds thematic emphasis and personal color. **~2 hours of work, down from 2 weeks traditionally.** [↑ Back to Map](#map-of-contents) --- # Part VIII — Strategic Questions > [!note]- Part VIII at a glance > Four questions to answer before building. They shape every downstream decision. ## 37. Build vs. assemble - **Build everything.** Maximum differentiation, maximum IP, maximum cost, slowest TTM. Altbridge-style. Requires large eng team + patient capital. 
- **Assemble everything.** Fast TTM, low cost, no defensibility. - **Hybrid (recommended).** Assemble commodity infra (LLM providers, vector stores, workflow engines, standard data). Build what's proprietary (agent prompts/methodology, memory schema, investment discipline, proprietary data relationships). Suggests a small-but-senior eng team — 3-5 at launch — focused on the proprietary reasoning layer, not reinventing infra. See [[How to start an AI-native company]]. ## 38. Single-model vs. multi-model Single-model is simpler, faster, exploits specific capabilities deeply. Multi-model is resilient, avoids lock-in, routes tasks to best-fit model. Answer: **multi-model with an abstraction layer, eventually**. Start single-model to ship, migrate once the base system works. Premature abstraction is a real cost. See [[Foundational Models MOC]]. ## 39. What is the GP actually doing? Most important strategic question for the fund's identity. Three answers in increasing order of leverage: - **Operator.** Manages the agent system, sets policy, reviews, makes final calls. Effectively a chief analyst over AI staff. - **Judgment layer.** Specialist in the 5-10 decisions/year where human judgment meaningfully differs from agent output. Tie-breaker and pattern-recognizer for geopolitical intuition, people judgment on management, market mood. - **Taste and narrative.** Sets investment philosophy, makes architectural decisions about what agents do and don't do, is the fund's face to LPs. Think Buffett at Berkshire — day-to-day delegated; what matters is philosophy, allocation discipline, LP relationship. Best answer: all three in shifting proportions. Early fund: more operator and judgment. Mature fund: more taste and narrative. **Trap: staying operator forever.** ## 40. The defensibility question If the tools commoditize, what makes this fund defensible over 5-10 years? Three candidate moats, increasing durability: 1. 
**Proprietary data and relationships.** Expert networks, channels, management access others don't have. Classic investment moat. 2. **Proprietary investment discipline encoded in the system.** Specific heuristics, sizing framework, bear case methodology, all in prompts and memory schemas. Surprisingly hard to replicate — not in any textbook. 3. **Compounding memory and pattern library.** Longer the fund runs, more post-mortems, better-calibrated agents, more institutional memory. **Widens with time.** ### Diagram 6 — The defensibility stack ```mermaid flowchart TB Y["📚 Year 5-10+ moat<br/><b>Compounding memory & pattern library</b><br/><i>Widens with every post-mortem</i>"] M["⚙️ Year 2-5 moat<br/><b>Proprietary discipline encoded in the system</b><br/><i>In prompts and memory schemas — not any textbook</i>"] N["🤝 Year 0-2 moat<br/><b>Proprietary data & relationships</b><br/><i>Classic investment moat</i>"] N --> M --> Y classDef early fill:#fff0d6,stroke:#c98a2e,color:#000 classDef mid fill:#d4f1d4,stroke:#2e8a3a,color:#000 classDef late fill:#d4e8ff,stroke:#2e6fc9,stroke-width:3px,color:#000 class N early class M mid class Y late ``` > The bailey is (1). The motte is (3). You defend with the bailey while the motte is under construction. If the motte isn't compounding, the fund has no long-term defense — which maps directly to [[AI era Defensibility]] and [[Defensibility Principles MOC]]. LP pitch: (3) as long-term defensibility, (2) as medium-term differentiation, (1) as immediate. Over 5-10 years, (3) becomes dominant. > [!important] If the fund isn't getting better over time, the moat isn't being built > The reflection layer must be doing genuine work. This is the most important internal metric. Directly maps to [[AI era Defensibility]] (motte-and-bailey) and [[Defensibility Principles MOC]] (switching costs, sequential defensibility, embedding). Also [[7 Powers]] if present. 
[↑ Back to Map](#map-of-contents) --- ## Atomic Note Candidates Extracted from the above — each is a reusable concept that would earn its keep as a standalone note: > [!idea] **Five Sources of Investment Edge** → §5 > Informational / Analytical / Behavioral / Structural / Temporal. Reusable framework beyond investing. > [!idea] **Investment Lifecycle as a Graph** → §6 > 11-stage decomposition (universe → origination → research → ... → ops). Reusable mental model for any capital allocator. > [!idea] **AI-Native vs AI-Integrated vs AI-Assisted** → §2 > The three-gradation definition. Useful for any AI-native product discussion — not just funds. > [!idea] **Synthesis-at-Scale Edge** → §24 > Applies far beyond investing — legal discovery, medical research, intelligence. > [!idea] **Memory & Coherence as Edge** → §25 > Uniquely-AI, durable, subtle. A category of advantage that compounds over time. > [!idea] **Thesis Drift** → §§14, 33 > A named failure mode. Maps back to memory & coherence and to human cognitive biases. > [!idea] **Decision Journal as First-Class Output** → §14 > The idea that decision documentation should be an artifact of the system, not a discipline bolted on. Transfers to PM, product, engineering. > [!idea] **Longitudinal Behavior Edge** → §28 > Tracking management/board/auditor behavior across companies and time. Almost nobody is doing this systematically. > [!idea] **Bull/Bear Structured Adversarial Debate** → §17 > The TradingAgents finding: structured debate beats single-agent analysis. Transfers to any high-stakes reasoning task. > [!idea] **Motte-and-Bailey for an AI-Native Fund** → §40 > Bridges this note to [[Defensibility Principles MOC]] and [[AI era Defensibility]]. Could live as a short atomic note connecting the two. 
--- ## Related Notes in Vault Flat index of vault links used above, grouped by theme: **Archetype & people** - [[Bill Ackman]] **AI foundations** - [[AI agents]] - [[AI Agents Stack]] - [[Autonomous AI Agents - The rise, potential and challenges]] - [[The Misleading Allure of Anthropomorphizing AI]] - [[How AI Is Rethinking the Way We Reason]] - [[AI Eats Services Not Software]] - [[AI usage is now a baseline expectation]] - [[The AI Timeline - Navigating the Road Ahead]] - [[Observed Exposure — Anthropic's AI Penetration Metric]] **Models, context, reasoning** - [[Large Language Model - LLMs]] - [[Architecture of LLM]] - [[Foundational Models MOC]] - [[Context Window]] - [[Context Layers MOC]] - [[in-context learning]] **Data & knowledge** - [[knowledge graphs]] - [[Knowledge Graphs for Industrial Data]] - [[data lakes]] - [[modern data stack]] **Frameworks & stack** - [[AI Frameworks]] - [[AI Inference Infrastructure]] - [[Recommended AI-native Tool-list Summary]] **Verification & judgment** - [[LLM-as-Judge]] - [[RAG-based verification]] - [[AI Verification]] **Memory, reflection, PKM** - [[090 Personal Knowledge Management - PKM]] - [[Information Evolution, Personal Knowledge & Collective Intelligence]] - [[2025 Year-End Reflection - What Landed, What Didn't]] - [[Maps of Content (MOCs)]] **Defensibility & strategy** - [[AI era Defensibility]] - [[Defensibility Principles MOC]] - [[How to start an AI-native company]] **Fund/capital** - [[VC Fund Performance Metrics]] - [[VC Moc]] --- ## Sources External references that shaped the piece: - [YC RFS Spring 2026 — AI-Native Hedge Funds](https://modelence.com/yc-rfs-spring-2026/ai-native-hedge-funds) - [Bridgewater AIA Labs](https://www.bridgewater.com/aia-labs) - [Altbridge AI](https://www.altbridge.ai/) - [Man Group — AI, Agents and Trend](https://www.man.com/insights/ai-agents-trend) - [TradingAgents / LLM Agents for Investment Management (ACM, 2025)](https://dl.acm.org/doi/10.1145/3768292.3770387) - 
[virattt/ai-hedge-fund — multi-agent reference implementation](https://github.com/virattt/ai-hedge-fund) - [Building a Local AI-Native Hedge Fund (Tapesh Das)](https://earezki.com/ai-news/2026-04-16-i-built-a-fully-local-ai-native-hedge-fund-system-multi-agent-auditable-no-paid-apis/) - [OpenClaw — Multi-Agent AI Hedge Fund](https://saulius.io/blog/openclaw-multi-agent-ai-hedge-fund-quantitative-trading) - [The Dawn of Hedge Agents (Sify)](https://www.sify.com/ai-analytics/the-dawn-of-hedge-agents-how-agentic-ai-is-transforming-hedge-fund-operations/) - [How AI is Transforming Hedge Fund Operations (CV5 Capital)](https://cv5capital.medium.com/how-ai-is-transforming-hedge-fund-operations-the-future-of-alpha-risk-and-efficiency-5a6cba620cab) --- #kp #deeptech #investing #agents #firstprinciple