# Deployment Velocity
Deployment velocity is measured by the engineer-hours required to go from contract signing to a production system running at a customer site. In industrial AI, this is the single most important metric for determining whether a company is a consultancy or a platform.
The math is simple. If every deployment takes 320+ engineer-hours of bespoke work, and every live customer consumes ongoing support hours on top of that, a team of 5 engineers can serve maybe 10-15 customers before it needs to hire again. Margins cap at 30-40%. That's a services-business ceiling of $3-5M ARR regardless of how good the technology is.
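The capacity ceiling can be sketched as a back-of-envelope model. The per-engineer billable hours and per-customer support load below are illustrative assumptions, not figures from the note; only the 320-hour deployment cost and 5-engineer team come from the text.

```python
HOURS_PER_DEPLOYMENT = 320        # bespoke engineering per new customer (from the note)
ENGINEER_HOURS_PER_YEAR = 1_800   # billable hours per engineer per year (assumed)
SUPPORT_HOURS_PER_CUSTOMER = 400  # ongoing support per live customer per year (assumed)

def max_customers(engineers: int) -> int:
    """Largest customer count whose deployment + support load fits team capacity."""
    capacity = engineers * ENGINEER_HOURS_PER_YEAR
    per_customer = HOURS_PER_DEPLOYMENT + SUPPORT_HOURS_PER_CUSTOMER
    return capacity // per_customer

print(max_customers(5))  # -> 12, inside the note's 10-15 range
```

Under these assumptions a 5-engineer team tops out at 12 customers, consistent with the note's 10-15 estimate; the exact ceiling moves with the assumed support load.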
The unlock: automating the bespoke parts. Each of these steps is where manual engineering hours hide:
- Data pipeline construction (different for every plant and historian vendor)
- Domain-specific event labelling
- Threshold calibration
- Model validation
The test for any industrial AI company: compare deployment hours for customer #1 vs. customer #3 in the same vertical. If hours aren't dropping by at least 50%, there's no learning curve. No learning curve means linear scaling. Linear scaling means consultancy.
[[Wright's Law]] applies here. Every doubling of cumulative deployments should yield a measurable reduction in deployment hours. If it doesn't, the "platform" is really a collection of project-specific scripts.
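Wright's Law can be made concrete with the standard experience-curve formula, hours(n) = hours(1) · n^b, where b = log2(1 − learning rate). The 320-hour starting point is from the note; the 50% learning rate is an illustrative assumption matching the "at least 50%" bar above.

```python
import math

def wrights_law_hours(first_deployment_hours: float, n: int, learning_rate: float) -> float:
    """Hours for the n-th deployment under Wright's Law.

    learning_rate is the fractional reduction per doubling of cumulative
    deployments (0.5 means hours halve with each doubling). Values here
    are illustrative, not measured.
    """
    b = math.log2(1.0 - learning_rate)  # experience-curve exponent (negative)
    return first_deployment_hours * n ** b

# 320 h for customer #1 with a 50% learning rate:
print(round(wrights_law_hours(320, 2, 0.5)))  # -> 160 (customer #2)
print(round(wrights_law_hours(320, 3, 0.5)))  # -> 107 (customer #3, >50% below #1)
```

Customer #3 sits about 1.6 doublings past customer #1, so a 50%-per-doubling curve predicts roughly a two-thirds reduction, comfortably clearing the 50% test.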
Related: [[Consultancy-to-Platform Transition]], [[Industrial AI Unit Economics]], [[Bespoke Engineering in Industrial AI]], [[Industrial AI MOC]]
---
Tags: #deeptech #firstprinciple #systems