> I've been interviewing many designers to come work with us at The Studio; below are some observations and reflections. I think the role of a designer is never really defined by what they can do. Not their taste, not their UX instincts, not their Figma skills. It is defined by what they can't do. Designers can't write production code, so they draw pictures of what code should produce. Engineers can't design, so they interpret those pictures as best they can. PMs can't do either, so they orchestrate from the middle. **The wall between disciplines is the job description.**

> Those walls are coming down. Engineers can design better. Designers can build. PMs can do both. And when those walls disappear, what's left isn't three separate roles. It's one thing: **builder**.

This connects to something I keep coming back to with [[Founder Mode]]: **the best outcomes happen when people engage directly with the material instead of delegating through layers.**

![[Screenshot 2026-02-14 at 23.25.14.png]]

So why does the wall exist in the first place? A button on a website isn't a "rectangle." It's a real UI element, usually a `<button>`, styled with CSS and wired up with JavaScript. It has a component name, props, design tokens, event handlers, state, and accessibility attributes. In Figma, a designer builds something visually similar with auto-layout, nested frames, padding, and color styles. But it doesn't map 1:1 to how an engineer builds it in React. One is a visual model. The other is an interactive system. This is [[Abstraction]] at work: two different abstractions of the same thing with no shared grammar between them.

That gap is why handoffs always feel ***lossy.*** You send a "pixel-perfect" spec and get back something that looks close but doesn't feel right. Not because the engineer is bad, but because they're translating between two languages without a dictionary.

![[Screenshot 2026-02-14 at 23.25.23.png]]

[[AI coding tool|AI coding tools]] are the translator.
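The gap is easier to see in code. Here's a minimal TypeScript sketch of the engineer's abstraction of that button; every name in it (`tokens`, `ButtonProps`, `renderButton`) is invented for illustration, not taken from any real design system:

```typescript
// Hypothetical sketch: a "button" as data + behavior, not a rectangle.
// Rendering to an HTML string keeps the structure visible without a browser.

// Design tokens: the shared vocabulary an engineer actually styles against.
const tokens = {
  color: { primary: "#2563eb", surface: "#ffffff" },
  spacing: { sm: "8px", md: "12px" },
} as const;

interface ButtonProps {
  label: string;
  variant: "primary" | "secondary";
  disabled?: boolean;
  onClick?: () => void; // behavior lives in the component, not in the mock
}

function renderButton({ label, variant, disabled = false }: ButtonProps): string {
  const bg = variant === "primary" ? tokens.color.primary : tokens.color.surface;
  return (
    `<button type="button" aria-disabled="${disabled}" ` +
    `style="background:${bg};padding:${tokens.spacing.sm} ${tokens.spacing.md}">` +
    `${label}</button>`
  );
}

console.log(renderButton({ label: "Save", variant: "primary" }));
```

The Figma frame that looks identical carries none of this: no `aria-disabled`, no `onClick`, no token references. That missing half is exactly what gets lost in handoff.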
Describe a UI in natural language, and the model converts it into real divs, CSS, and component structure. You don't need to become fluent in code. You need to know enough about the material to steer the output. And that bar for "enough" gets lower every month. This is the same dynamic as [[Deployment Velocity]]: the tools are compressing the time between idea and working thing.

The bigger shift is that the material of design itself has changed. In industrial design, the material is wood, metal, plastic. In traditional digital design, it was pixels. In 2026, **the material is data**. Real [[API]] responses, real [[Large Language Model - LLMs|LLM]] outputs, real user inputs flowing through real logic.

![[Screenshot 2026-02-14 at 23.25.32.png]]

The product isn't the screen. **It's the intelligence underneath.** And if you're designing that product in static mocks, you're shaping a replica, not the real thing. This is exactly what I've been thinking about with [[The Headless Web Transition]]: data structure is the new branding. The interaction logic and the workflow intelligence persist across whatever interface gets generated. Your beautiful design system becomes a suggestion.

I watched a designer demo a safety product he built recently: a block-based editor, all running on real endpoints. None of it was mocked in Figma first. He started in code with real data from day one, because you simply can't evaluate an AI experience without the AI actually running. You can't mock up intelligence. You have to build it. And when you do, you get to validation and conviction far faster than the old loop of static mocks, debate, handoff, build, test.

The risk profile of building has fundamentally changed. It's now as cheap to make the thing as it is to argue about a picture of it. The [[Friction Frontier]] has shifted: what used to be the expensive, slow part (building) is now fast and cheap.
**The expensive part is having the taste and judgment to know what to build.**

![[Screenshot 2026-02-14 at 23.25.41.png]]

Something practical that clicked for me: building isn't one mode. It's a spectrum.

- Zero-to-one is experimental. String together some APIs, an LLM endpoint, and a database to test if the concept even works. The code is messy. You're learning, not shipping. This maps directly to [[API First Development]] and the modern stack outlined in [[Building Mobile Apps Today]] and [[React App Foundations]]: authentication, payments, database, all modular and ready to plug together.
- Feature work is the middle ground. You have an existing product and want to add something. Build it with real data, test it in context, and iterate before pulling in a full engineering team. You're not handing off a spec; you're handing off a working prototype. This is the [[Consultancy-to-Platform Transition]] applied to design work: productize what used to be bespoke.
- Then there's polish. CSS variables, scroll-triggered gradients, transition timing, padding tokens. Taste-driven adjustments that used to require a redline spec for an engineer to interpret. Now you just do them yourself.

You bounce between all three constantly. Knowing which mode you're in tells you how much to care about code quality vs. fidelity vs. architectural rigor.

![[Screenshot 2026-02-14 at 23.25.59.png]]

A few tactical things worth noting. You don't need to memorize your codebase. Tell the model "put the same search header on this page that we use on the other page" and current-gen models will find and reference the existing component. MCP servers for your design system give the model a vocabulary of approved components and tokens. Chrome DevTools is becoming more useful than Figma for a lot of this work, showing you nested divs, alignment, and spacing in the actual product. And natural language is becoming enough. Six months ago a scroll-triggered gradient required a debugging session.
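For scale, that debugging-session class of work looked something like this. A minimal TypeScript sketch (the function name and color values are hypothetical) of the core of a scroll-triggered gradient: mapping scroll progress to a CSS background value.

```typescript
// Hypothetical sketch: map scroll progress (0..1) to a CSS gradient string.
// The fiddly part was always tuning these mappings by hand.

function gradientAt(progress: number): string {
  const p = Math.min(1, Math.max(0, progress)); // clamp to [0, 1]
  const angle = Math.round(90 + p * 90);        // rotate 90deg -> 180deg as you scroll
  const stop = Math.round(p * 100);             // slide the first color stop down
  return `linear-gradient(${angle}deg, #0ea5e9 ${stop}%, #1e293b 100%)`;
}

// In the browser you'd wire it to scroll (not runnable in Node):
// window.addEventListener("scroll", () => {
//   const p = window.scrollY / (document.body.scrollHeight - window.innerHeight);
//   document.body.style.background = gradientAt(p);
// });

console.log(gradientAt(0));   // linear-gradient(90deg, #0ea5e9 0%, #1e293b 100%)
console.log(gradientAt(0.5)); // linear-gradient(135deg, #0ea5e9 50%, #1e293b 100%)
```

A handful of lines, but the kind of tuning that used to eat an afternoon of back-and-forth.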
Today you just describe it and the model handles it. Tooling workarounds like saved snippets and template libraries are probably transient; natural language is the durable interface.

The models themselves are moving fast too. Six months ago, models rewrote everything from scratch and styled inline. Now they reference existing components, respect design systems, and produce code that passes engineering review. So invest less in scaffolding around model limitations and more in describing experiences clearly. This is [[Technology Intimacy]] accelerating: the gap between intent and execution keeps shrinking.

So how do you know if what you've built is right? You feel it. Do you reach for it? Does it solve a problem? Is it fluid? Does it break? If you don't have PMF with yourself, it's not there yet. And the best way to run that test is when the intelligence is real, not mocked.

What's [[AI era Defensibility|defensible]] in this new world? Not the pixels. Not the Figma file. The [[On the need for great design|taste, common sense, and authenticity]] that no model can replicate. The [[Supercredibility]] of someone who has built, shipped, and iterated on real products. The [[domain specific sense-making]] that separates a plausible-looking interface from one that actually works for the people using it.

![[Screenshot 2026-02-14 at 23.26.11.png]]

The walls are coming down. What's on the other side is a lot of fun.

---

Links:

- [[On the need for great design]]
- [[The Headless Web Transition]]
- [[AI coding tool]]
- [[Intimate Interfaces and Sensory AI]]
- [[Technology Intimacy]]
- [[AI era Defensibility]]
- [[Deployment Velocity]]
- [[Friction Frontier]]
- [[Step 4 - Product and Design]]
- [[Abstraction]]
- [[Founder Mode]]
- [[First Principles and Mental Models MoC]]

---

#kp #deeptech #design #ai