On the Discipline Problem
Experienced developers are getting slower with AI tools, not faster. Nineteen percent slower, according to one study. The juniors are speeding up. The experts are drowning.
This inversion tells us something important. The tools generate at a pace that breaks our intuitions about knowledge work. Sixty thousand lines in an afternoon. Generation is nearly free, its cost falling a thousand-fold annually. What's not free is verification: catching the confident errors, the hallucinations, the security holes introduced by a model with no concept of consequences. That work falls on the experts. They're spending their days vetting slop.
Generation scales with compute. Verification scales with human attention. The asymmetry is the whole problem.
The bottleneck is not model intelligence. The models are intelligent enough. The bottleneck is discipline: the absence of systems that scale quality at the same rate the models scale output.
Software engineering is experiencing this first, on a compressed timeline. Every knowledge domain where AI produces faster than humans can verify will face the same asymmetry. Law. Medicine. Finance. Research. The pattern is universal; only the clock speed differs.
The solution is a framework I call the Cognitive Cycle. Four phases. One principle: humans own intent and constraints, machines own execution.
Phase I: Compression
This phase establishes the what.
The human's job is to compress intent into an irreducible unit. What do you actually want? What does success look like? The process is interrogative: asking questions until you've distilled the goal to its most succinct possible form.
The output is an unambiguous definition of success, expressed in terms precise enough to be verified automatically. Not "make it work" but "pass these linters, clear these security scans, pass these tests." These constraints become the quality gate.
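As a sketch of what that gate can look like in practice, consider a script that simply runs the constraints and fails loudly. This is a hypothetical example: `ruff`, `bandit`, and `pytest` are stand-ins for whatever linters, security scanners, and test suites a team already runs.

```python
# quality_gate.py -- a minimal sketch of Phase I output as executable constraints.
# The commands below are placeholders; substitute your own linters, scanners, and tests.
import subprocess
import sys

CONSTRAINTS = [
    ["ruff", "check", "."],           # lint: style and obvious bugs
    ["bandit", "-r", "src/"],         # security: known insecure patterns
    ["pytest", "--maxfail=1", "-q"],  # behavior: the tests that define "done"
]

def main() -> int:
    for cmd in CONSTRAINTS:
        print(f"gate: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed: {' '.join(cmd)}")
            return result.returncode  # fail fast: the constraint says no, not a reviewer
    print("gate passed: all constraints satisfied")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The specific tools don't matter. What matters is the shape: success is a command that exits zero, which means an agent can check it as often as it needs to without asking a human.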
This is the highest-leverage work now available. Agents hit what Dex Horthy calls the "dumb zone" when context overflows; more input produces worse output. The discipline here is curatorial: compress ruthlessly. The agent amplifies whatever environment it's given. Feed it disciplined constraints and it scales discipline. Feed it chaos and it scales chaos.
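One way to make "compress ruthlessly" mechanical is to give context an explicit budget. The sketch below is hypothetical: it assumes each candidate snippet has already been scored for relevance (by embeddings, heuristics, or judgment), and it simply packs the highest-value material into a fixed token budget instead of handing the agent everything.

```python
# A hypothetical context-budgeting helper: keep the most relevant material that fits,
# rather than letting the prompt overflow into the "dumb zone".
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float  # assumed to come from your own scoring (embeddings, heuristics, ...)

def estimate_tokens(text: str) -> int:
    # Crude approximation (~4 characters per token); swap in a real tokenizer if available.
    return max(1, len(text) // 4)

def compress_context(snippets: list[Snippet], budget_tokens: int) -> list[Snippet]:
    """Greedy pack: highest-relevance snippets first, stop at the budget."""
    chosen, used = [], 0
    for snippet in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        cost = estimate_tokens(snippet.text)
        if used + cost > budget_tokens:
            continue  # skip rather than overflow
        chosen.append(snippet)
        used += cost
    return chosen
```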
Phase II: Alignment
This phase establishes the how.
The agent takes the compressed intent and figures out how to operationalize it. It synthesizes documentation and codebases into a Research Artifact (a snapshot of the system as it actually exists). It generates an Implementation Plan with exact file changes and test procedures. Paint by numbers.
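The exact format of these artifacts matters less than their shape. A minimal, hypothetical schema might look like the sketch below; the field names are illustrative, not a standard.

```python
# Hypothetical shapes for the Phase II artifacts. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ResearchArtifact:
    """Snapshot of the system as it actually exists, not as the docs claim."""
    summary: str  # what the relevant subsystem does today
    key_files: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

@dataclass
class PlannedChange:
    path: str          # the exact file to touch
    intent: str        # what changes and why
    verification: str  # the command or test that proves the change worked

@dataclass
class ImplementationPlan:
    """Paint by numbers: reviewed by a human before any code is generated."""
    goal: str
    research: ResearchArtifact
    changes: list[PlannedChange] = field(default_factory=list)
```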
The human reviews the plan, not the output. This is the critical shift: reviewing a plan is higher leverage than reviewing code. You catch misalignment before generation, not after. Feedback flows asynchronously, like comments on a document. The human stays in the loop without being in the way.
Phase III: Execution
This phase is the do and verify.
The approved plan hands off to an orchestration layer that manages parallel execution. Sub-agents work simultaneously. Verification runs continuously against the constraints from Phase I (the "tea kettle whistle" that confirms correctness faster than any human review could).
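Mechanically, the orchestration layer can be little more than a worker pool plus the Phase I gate run on every unit of work. The sketch below is deliberately generic and hypothetical: `run_subagent` stands in for whatever agent runtime you use, and the gate is the same executable-constraints script from Phase I.

```python
# A hypothetical orchestration sketch: parallel sub-agents, each one's work accepted
# only if it passes the Phase I quality gate.
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_subagent(task: str) -> str:
    """Stand-in for your agent runtime: do the work, return the workspace it used."""
    # A real implementation would drive an agent here; this stub just names a workspace.
    return f"./workspaces/{task}"

def gate_passes(workspace: str) -> bool:
    # Re-run the executable constraints -- the "tea kettle whistle".
    result = subprocess.run(["python", "quality_gate.py"], cwd=workspace)
    return result.returncode == 0

def execute_plan(tasks: list[str], max_parallel: int = 4) -> dict[str, bool]:
    results: dict[str, bool] = {}
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(run_subagent, task): task for task in tasks}
        for future in as_completed(futures):
            task = futures[future]
            results[task] = gate_passes(future.result())  # accept only gated work
    return results
```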
The human is elsewhere, doing other work, protected from the context-switching that studies suggest consumes 40% of productive time. This is where velocity gains hit 5x to 7x.
Phase IV: Growth
This phase is the codify.
The work isn't done until what occurred has been distilled into reusable patterns. Say an agent implements OAuth integration on its third attempt. The first two attempts failed from context overload (the dumb zone again); the team compressed the input, and the third attempt succeeded. Now the failures get codified as test cases. The successful approach becomes a template. The next engineer to touch authentication starts from higher ground.
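Codification can be literal. A hypothetical Phase IV artifact for the OAuth example might look like the sketch below: the failures live on as named regression tests, the working approach lives on as a template plan. Every name and path here is invented for illustration.

```python
# A hypothetical Phase IV artifact: failures become tests, the success becomes a template.
# All names and paths below are illustrative, not from a real codebase.
from dataclasses import dataclass, field

@dataclass
class CodifiedPattern:
    name: str
    context_budget_tokens: int  # the compression that made attempt three work
    regression_tests: list[str] = field(default_factory=list)  # tests pinning the earlier failures
    template_plan: list[str] = field(default_factory=list)     # steps the next engineer starts from

OAUTH_PATTERN = CodifiedPattern(
    name="oauth-integration",
    context_budget_tokens=8_000,
    regression_tests=[
        "tests/auth/test_attempt_one_regression.py",  # codifies what the first attempt got wrong
        "tests/auth/test_attempt_two_regression.py",  # codifies what the second attempt got wrong
    ],
    template_plan=[
        "Compress context to the auth module plus the provider's token docs",
        "Generate the token exchange against the Phase I gate",
        "Run both regression tests before requesting review",
    ],
)
```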
The challenge: this is harder than it sounds.
Defining success with precision requires thinking most people skip when they can just "try things." The tool makes exploration so cheap that disciplined constraint-setting feels like friction. It's not. It's the work.
The tool actively resists discipline. When you can generate a solution in thirty seconds, spending thirty minutes defining constraints feels wasteful. But without those constraints, you spend three hours verifying that the thirty-second solution doesn't break something else. The math favors discipline. The psychology doesn't.
Most organizations haven't internalized this. They're stuck in the "semi-async valley of death": too automated to be manual, too chaotic to be automated. They've deployed AI assistants but haven't built the verification systems to trust them. They're running agents but reviewing every output line by line, losing time in both directions (the cognitive load of supervision without the velocity of autonomy).
They keep chasing model upgrades, always one API away from productivity gains that never arrive. The bottleneck keeps moving, and they keep misidentifying where it is.

But there's a deeper problem, and it's human rather than technical.
If the discipline framework works (if humans really do shift from production to constraint-setting, from making to curating) then the skills that made someone good at their job five years ago become necessary but insufficient. The expert developer who built intuition through thousands of hours of coding now needs different intuitions: what constraints to set, how to verify outputs they didn't write, how to maintain mental models of systems they no longer build by hand.
This is genuinely disorienting. Professional identity is wrapped up in craft. The surgeon's identity is in the steadiness of their hands. The lawyer's identity is in their ability to construct arguments. The developer's identity is in their ability to write elegant code. When the valuable work shifts from execution to specification, from doing to defining, these identities don't transfer cleanly.
Some will adapt. They'll find that the new work (compressing intent, setting constraints, designing verification systems) is its own form of craft, with its own satisfactions. Others will resist, insisting that the old skills still matter, that the human touch is irreplaceable, that the tools will never really get good enough. They'll be half right. The old skills do still matter. But they matter as inputs to a new kind of work, not as the work itself.
The organizations that figure this out first will have a compound advantage. Not just because they'll move faster, but because they'll accumulate institutional knowledge in ways their competitors can't match. Each cycle through the framework deposits another layer of codified expertise. The gap widens.
The ones that don't figure it out will plateau. They'll have access to the same models, the same tools, the same theoretical capabilities. But they won't be able to convert capability into velocity because they haven't built the discipline layer that makes velocity sustainable.
The question worth asking: what's actually constraining your velocity? If the answer is model capability, wait six months. The models are improving fast enough that patience is a viable strategy.
But if the answer is discipline (the absence of clear constraints, reliable verification, systematic knowledge capture) then waiting won't help. The tools exist. The frameworks exist. What's missing is the willingness to do the harder, less obviously productive work of making them systematic.
That's the discipline problem. And it's not going away.