# [[Selling AI MOC]]
The enterprise sales playbook is being rewritten. AI doesn't just change the product. It changes how you sell it, how fast you prove it, and what trust even means.
# 1. The Buyer Has Changed
Every enterprise has an AI budget now. Board pressure to adopt. Replacement cycles that used to be 5-7 years compressed to months. [[AI usage is now a baseline expectation]] captures this: stagnation is slow-motion failure. Buyers aren't waiting for outbound. They're self-educating, shortlisting one or two products, and making decisions faster than any traditional RFP cycle allowed.
This creates a paradox. More buyers in market than ever, but each buyer is pickier and moves faster. The old "educate the market" phase is collapsing. If you're still running awareness campaigns when your prospect already watched three competitor demos on YouTube, you've lost the tempo.
For [[AI-first GTM]], inbound matters more. Intent data and predictive scoring (the 6sense layer) become essential because the window between "interested" and "decided" is shrinking. The efficiency gains from AI-native GTM stacks (2 SDRs doing the work of 5, 1 marketer outputting like 3) aren't just cost savings. They're speed advantages.
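The "6sense layer" above is essentially weighted signal aggregation per account. A minimal sketch of what an intent score looks like under that framing; the signal names and weights here are illustrative assumptions, not how 6sense actually scores accounts:

```python
# Hypothetical buying signals and weights (assumptions for illustration).
SIGNAL_WEIGHTS = {
    "visited_pricing_page": 3.0,
    "watched_competitor_demo": 2.0,
    "hired_ai_lead": 1.5,
    "opened_outbound_email": 0.5,
}

def intent_score(signals: dict[str, int]) -> float:
    """Weighted sum of observed buying signals for one account."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count for name, count in signals.items())

# An account that hit the pricing page twice and watched three competitor demos
# outranks one that merely opened emails -- that's the account you call today.
hot = intent_score({"visited_pricing_page": 2, "watched_competitor_demo": 3})
cold = intent_score({"opened_outbound_email": 4})
print(hot, cold)  # 12.0 2.0
```

The point of the sketch: when the interested-to-decided window is short, ranking accounts by composite intent beats working a static list top to bottom.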
# 2. Demos as the New POC
The proof of concept is dying. Or rather, it's migrating earlier. The demo is where conviction happens now. See [[Demo Template]].
The best demos create a "lightbulb" moment: showing the AI doing actual human work in real time. Voice agents are particularly effective here because the replacement of labor with software becomes visceral. You hear it working. Procurement agents that collapse a weeks-long purchase request into five minutes. Customer support agents that handle escalations with infinite patience in any language.

The tactical shift: run demos in sandboxes seeded with real (or synthetic) customer data. Don't wait for a formal POC timeline to show production-grade capability. The first sales meeting should contain the proof.
This connects to [[Land-and-Expand in Enterprise AI]] but inverts the usual risk. The traditional land-and-expand problem is that pilots don't convert. Here, the demo itself is the pilot. If you nail the demo, the "pilot" stage becomes a formality. If you don't, you might not get a second shot. Enterprise buyers who test an AI product that underperforms are not coming back to try a second vendor. They're burned and backlogged.
# 3. Trust Means Something Different Now
Trust used to mean: we log what humans do on our software. **Trust now means: the AI does the work correctly and we can prove how.**
This is the [[AI Verification]] problem applied to sales. When your product "presses buttons" on behalf of the customer, compliance teams want red-teaming, real-time monitoring, and clear answers on hallucination safeguards. Do you train on my data? How are prompts and outputs logged?
[[Domain Experts as Eval Builders]] is the playbook here. The companies winning trust aren't just waving SOC 2 certs. They're building eval suites, rubric-scored against real operational scenarios. [[Agent Skills as Codified Domain Expertise]] shows how this works in practice: ship the knowledge package with the eval framework attached. The rubric is the moat, and it's also the sales tool.
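"Rubric-scored against real operational scenarios" can be made concrete. A minimal sketch of one such eval, assuming a refund-policy scenario and a hits-minus-hallucinations rubric (scenario content, rubric weights, and scoring rule are all illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str
    must_include: list[str]   # facts a correct answer has to contain
    must_avoid: list[str]     # known hallucination tripwires

def score(answer: str, scenario: Scenario) -> float:
    """Rubric: +1 per required fact present, -1 per forbidden claim, clipped to [0, 1]."""
    hits = sum(fact.lower() in answer.lower() for fact in scenario.must_include)
    misses = sum(trap.lower() in answer.lower() for trap in scenario.must_avoid)
    return max(0.0, (hits - misses) / len(scenario.must_include))

# Hypothetical scenario drawn from a customer's actual policy docs.
scenario = Scenario(
    prompt="What is our refund window for enterprise contracts?",
    must_include=["30 days", "written notice"],
    must_avoid=["90 days"],  # a hallucination observed in past transcripts
)
print(score("Refunds require written notice within 30 days.", scenario))  # 1.0
```

Real eval suites are richer (LLM-graded rubrics, multi-turn scenarios), but the sales-relevant property is the same: the compliance team can read the scenarios and the rubric, which is what makes the eval a trust artifact rather than an internal test.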
New AI agent security standards are emerging: enterprise testing, certification, insurance against hallucination liability. Guardrail platforms monitoring non-compliant AI usage across vendors in real time. If your product takes actions in business workflows, expect this to become standard. Come prepared or get disqualified.
See: [Truthsystems](https://www.truthsystems.ai/) | [AIUC](https://aiuc.com/product)
# 4. Outcome Pricing Replaces Seat Pricing
Enterprise sales has always been about the narrow wedge. The change is how fast you have to prove value on that wedge. The expectation is positive ROI within three months. Some buyers expect it immediately.
This collapses the [[Consultancy-to-Platform Transition]] timeline. You can't spend 320 engineer-hours on bespoke deployment and also deliver ROI in 90 days. [[Deployment Velocity]] becomes the sales metric, not just the ops metric. Every doubling of cumulative deployments should compress hours per deployment via [[Wright’s Law]]. If it doesn't, you're selling services, not software.
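The Wright's Law claim above has a concrete shape: if each doubling of cumulative deployments cuts hours per deployment by a fixed learning rate, hours fall as a power law of cumulative volume. A sketch with illustrative numbers (the 320 starting hours echo the figure above; the 20% learning rate is an assumption):

```python
import math

def hours_per_deployment(n: int, h1: float = 320.0, learning_rate: float = 0.20) -> float:
    """Wright's Law: h(n) = h1 * n^b with b = log2(1 - learning_rate),
    so each doubling of cumulative deployments n cuts hours by learning_rate."""
    b = math.log2(1 - learning_rate)
    return h1 * n ** b

for n in [1, 2, 4, 8, 16, 32]:
    print(f"deployment #{n:>2}: {hours_per_deployment(n):6.1f} engineer-hours")
# 320 -> 256 -> 204.8 -> 163.8 -> 131.1 -> 104.9
```

The diagnostic in the text falls out directly: plot your actual hours per deployment against cumulative deployments, and if the curve is flat, you're selling services, not software.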
The pricing unlock: charge for outcomes, not seats. Customer support? Price per ticket resolved. Procurement? Price per purchase order processed. This aligns your revenue with the customer's goals, and it makes the "should we buy this" question trivially easy: the product pays for itself or it doesn't.
The risk: outcome-based pricing only works if your [[Deployment Velocity]] is fast enough that the margin math holds. If implementation takes months, you're burning cash before revenue starts. [[Industrial AI Unit Economics]] applies here: the breakout condition is when deployment hours drop enough that each new customer is profitable from month one.
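The breakout condition above is a one-line inequality: outcome revenue in the first month must cover deployment cost plus serving cost. A sketch, with every number (hourly cost, serving cost, revenue) an illustrative assumption:

```python
def month_one_profitable(monthly_outcome_revenue: float,
                         deployment_hours: float,
                         hourly_cost: float = 150.0,
                         monthly_serving_cost: float = 2_000.0) -> bool:
    """Breakout condition: a new customer covers its own deployment and
    serving costs within the first month. All figures are illustrative."""
    return monthly_outcome_revenue >= deployment_hours * hourly_cost + monthly_serving_cost

# 320 bespoke hours at $150/hr needs ~$50k of month-one outcome revenue;
# compress deployment to 40 hours and $10k of resolved tickets clears the bar.
print(month_one_profitable(10_000, deployment_hours=320))  # False
print(month_one_profitable(10_000, deployment_hours=40))   # True
```

This is why [[Deployment Velocity]] and outcome pricing are coupled: the pricing model only becomes safe once the hours term on the right-hand side has been compressed.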
# 5. Brand as Competitive Moat
Now that AI has made it faster than ever to build a product, differentiation on "what you do" gets thin. New entrants catch up quickly on features. **Differentiation shifts to "how fast and reliably you do it" and, critically, to brand.**
Customers flock to the first players to reach scale and gain market recognition. This is [[7 Powers]] branding in action: durable attribution of higher value to an objectively similar offering. "No one ever got fired for choosing the market leader" applies harder in AI because the buyer's risk tolerance is lower. They'll only test once.
The [[Incumbent Bundling Risk]] angle matters too. If your ACV is $50-60k and the platform incumbent ships a 60% solution for free, your value proposition collapses. Brand is what keeps you out of that kill zone. Being the recognized leader means you're the safe choice, not the one getting compared on a feature matrix.
Strategic implication: invest in community early. Power users, their enablers (advisors, fractional CXOs), and the network around them become your distribution. [[AI era Defensibility]] through brand velocity, not just product velocity.
# 6. The GTM Team is Evolving
**The sales role has become more technical.** AEs need to understand how agents actually do human work, not just the people who use the software. Some companies are collapsing the AE and Solutions Engineer roles entirely.
Forward deployed engineers handle integration and tuning. [[Bespoke Engineering in Industrial AI]] shows where the hours hide. A new post-sales role is emerging: part PM, part consultant, part customer advocate. Companies call it Agent PM or Solutions Architect. Usually an ex-consultant with a few years of experience and strong customer presence. As products go more self-serve, this role becomes the core human touchpoint.
The meta-point: as business models shift to outcomes and usage, you have to ensure customers actively use the product. Selling a seat and checking back in a year is dead. [[Team Growth x Product Market Fit]] says only long-term contracts prove PMF. In outcome-based models, sustained usage is the contract renewal.
Reps themselves are using AI to accelerate their own work. Data entry, account research, outbound sequencing, note-taking, all being automated. The rep's job is shifting to deep customer understanding and relationship building. The [[AI-first GTM]] stack makes this possible: fewer people, higher leverage, more time with customers.
## Companies in This Space
- Customer support: [Decagon](https://decagon.ai) built outcome-based pricing tied to ticket resolution rates. CSAT and deflection measured week-over-week during pilots. If the AI resolves the ticket, you pay. If it doesn't, you don't. Clean alignment.
- Procurement: [Lio](https://lio.ai) (formerly askLio) deploys a multi-agent system where a sourcing agent, negotiation agent, and compliance agent work in parallel. Collapses a weeks-long purchase request into minutes. No one enjoys logging into SAP to enter data manually.
- Freight negotiation: [HappyRobot](https://www.happyrobot.ai) runs voice agents that negotiate carrier rates using real-time market data. The demo sells itself because you can literally hear the agent doing the work.
- AI agent security: [AIUC](https://aiuc.com) is building the first AI agent security standard (AIUC-1). Enterprise testing, certification, and insurance against hallucination liability. Think SOC 2 but for agents. Founded with people from Anthropic, developed with Stanford, MITRE, and the Cloud Security Alliance.
- AI governance: [Truth Systems](https://www.truthsystems.ai) builds real-time guardrails that monitor non-compliant AI usage across vendors. Translates internal policies into enforceable rules via browser extension. Already working with $2B+ law firms.
- Sales tooling (the GTM stack itself): [Clay](https://www.clay.com) for data enrichment and account research at scale. [11x](https://www.11x.ai) for autonomous outbound SDRs and meeting scheduling. [Gong](https://www.gong.io) and [Granola](https://www.granola.ai) for call intelligence and meeting notes. These are the tools reps use to compress their own workflows so they can spend more time with customers.
## Links
Core:
- [[AI-first GTM]]
- [[Demo Template]]
- [[Land-and-Expand in Enterprise AI]]
- [[Consultancy-to-Platform Transition]]
- [[Deployment Velocity]]
- [[Industrial AI MOC]]
Competitive & Strategic:
- [[7 Powers]]
- [[Incumbent Bundling Risk]]
- [[AI era Defensibility]]
- [[Technical Moat Assessment Framework]]
Trust & Verification:
- [[AI Verification]]
- [[Agent Skills as Codified Domain Expertise]]
- [[Domain Experts as Eval Builders]]
- [[Evals]]
Growth & Models:
- [[Team Growth x Product Market Fit]]
- [[Industrial AI Unit Economics]]
- [[Wright’s Law]]
- [[Bespoke Engineering in Industrial AI]]
Context:
- [[AI usage is now a baseline expectation]]
- [[AI Capex Super-Cycle]]
- [[How to start an AI-native company]]
- [[3 Hard Truths of Deep Tech Commercialization]]
- [[The Deep Tech Growth Cycle is different]]
- [[Execution x Evolution x Disruption]]
---
Tags: #deeptech #systems #kp #firstprinciple