The AI Enablement Brief · Mar 11, 2026
Less Prompting. More Building.
The more time I spend with Claude Code, the less time I spend prompting. Here’s why that’s the shift nobody talks about.
Here’s something counterintuitive that’s been happening to me lately.
The more I use AI, the less time I spend prompting.
You’d expect the opposite. More usage, more prompts. More workflows, more back and forth. But that’s not what’s happening. The more I build, the more the prompting disappears — replaced by something that runs on its own.
I didn’t plan for this. I stumbled into it. And I think it changes how we should think about AI adoption entirely.
The Base44 Chapter
It started with mediaplan.ca.
I built it in Base44 — a no-code platform that handles tokens, API access, and billing in a way that made the whole thing feel manageable. It was my first real vibe coding experience: building an entire platform from scratch, figuring out infrastructure, watching something real take shape from nothing.
I learnt a ton. And I became firmly platform-agnostic as a result. Not married to any tool or model. I use what works. I switch when something better comes along.
That mindset matters for what happened next.
The Prompt Tax
Most AI usage today has an invisible cost: the Prompt Tax.
Every time you open a chat window, you’re paying it. You write the context. You guide the output. You review, refine, redirect. You navigate between platforms. You remember what you told which tool.
It’s not huge in isolation. But it accumulates. And more importantly, it keeps you in a reactive posture — responding to the tool instead of the tool responding to you.
Microsoft’s 2024 Work Trend Index found that knowledge workers spend an average of 57% of their time on communication and coordination rather than actual creation.
AI has helped at the margins.
But for most people, the interface is still a chat box — which means they’re still paying the Prompt Tax on every task.
Building changes the equation.
The Engine Layer
The shift happened gradually as I spent more time with Claude Code and Claude Cowork.
I started building workflows — not just using AI, but wiring it into the fabric of how I work. Personal workflows. Professional ones. Systems that connect inputs to outputs without me needing to hold them together manually.
And here’s what surprised me: the more I built, the less I prompted.
Claude is particularly good at executing on a specific goal with minimal intervention.
Yes, it asks “do you want to proceed?” more than I’d like. I almost always say yes. But the point is: once the system exists, I’m not writing prompts anymore. I’m making decisions.
That’s the Engine Layer — the infrastructure you build once that handles the execution indefinitely. It’s not a prompt. It’s not a workflow diagram. It’s a running system that knows what to do when you point it at a problem.
I was skeptical at first. I even asked Claude directly: should I move these workflows to Claude Code? It told me I wouldn’t have to navigate multiple platforms and long prompt chains anymore.
It was right.
The One Command Era
Here’s what this looks like in practice now.
I can trigger entire workflows from my phone. Single command. No open tabs, no context switching, no explaining the situation from scratch.
The engine runs. I supervise.
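To make the idea concrete, here is a minimal sketch of what an engine layer can look like in code. Everything in it — the workflow name, the steps, the `trigger` function — is hypothetical illustration, not anything from my actual setup. The point is the shape: steps wired together once, then fired with a single command.

```python
# Hypothetical sketch of an "engine layer": a one-command workflow runner.
# All names and steps here are illustrative, not a real implementation.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    name: str
    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, context: dict) -> dict:
        # Each step receives the accumulated context and returns updates,
        # so no per-step prompting is needed once the workflow is wired up.
        for step in self.steps:
            context.update(step(context))
        return context

# Illustrative steps — in a real engine these might call an AI API,
# fetch data, or post results somewhere.
def gather_inputs(ctx: dict) -> dict:
    return {"inputs": ["brief", "notes"]}

def draft_output(ctx: dict) -> dict:
    return {"draft": f"summary of {len(ctx['inputs'])} inputs"}

# The engine: workflows built once, registered under a command name.
ENGINE = {"weekly-brief": Workflow("weekly-brief", [gather_inputs, draft_output])}

def trigger(command: str) -> dict:
    # The single entry point: one command, no prompt, no context re-explained.
    return ENGINE[command].run({})

result = trigger("weekly-brief")
print(result["draft"])  # the supervisor reviews the output, not the process
```

The design choice that matters is the `trigger` function: all the context lives inside the workflow, so invoking it costs one command instead of a prompt chain.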
This is what “from producer to supervisor” actually feels like at the personal level — not just a framing for how organizations should use AI, but a lived shift in how individual work gets done.
The friction of prompting is replaced by the leverage of having built something.
It’s a different relationship with the technology. Less conversational, more operational. Less about finding the right words and more about having the right architecture.
And the thing is — getting here required investment. Time spent building instead of just using. Choosing Claude Code when the chat interface would have been faster in the short term. Thinking about systems instead of outputs.
That investment compounds. Every hour spent building the engine is reclaimed tenfold in reduced prompting overhead.
What This Means
The goal was never to get better at prompting.
The goal was to build a system that doesn’t need it.
Most conversations about AI productivity focus on prompt quality — how to write better instructions, how to get better outputs. That’s useful, but it’s optimizing the wrong layer. It’s getting better at paying the Prompt Tax instead of eliminating it.
The builders — the people spending time in Claude Code, wiring workflows, building their own internal tools — aren’t writing better prompts. They’re making prompts irrelevant.
This is the new dividing line. Not between AI users and non-users. Between people who use AI and people who build with it.
Less prompting. More building.
Are you still spending your days in chat boxes — or have you started building the engine?