
When AI-built tools move from demo to decision in days, midlevel leadership judgment becomes the real control system.
A marketing manager with no engineering background opens Cursor on Monday morning. By Wednesday afternoon, she has a working customer-facing app. It looks polished. It performs the core task. She demos it to her VP, who forwards it to the CMO, who shows it in the executive staff meeting as evidence that the team is “moving at AI speed.”
By Friday, it is in front of customers.
No one asked who owned the decision to ship it. No one tested it against the conditions it would actually face. No one had the cultural standing to say, "This looks great, and we are not putting it into production." The prototype became a product because the organization had no system for telling the difference.
I watched a version of this scenario play out recently in a boardroom. A senior executive demoed an AI-built internal tool. The room admired the speed. What received less attention was the harder question: who would own it after launch, who would maintain it, and what would happen when it produced an answer that was confidently wrong.
This is what vibe coding is about to expose across American business. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment.
The Real Trend Is Decision Compression
Andrej Karpathy coined the term vibe coding in February 2025 to describe an AI-assisted style of building software through natural-language prompting, often without close inspection of the underlying code. Google Cloud describes vibe coding as a software development practice that makes app building more accessible, especially for people with limited programming experience. Tools like Cursor, Replit, Lovable, Bolt, GitHub Copilot Workspace, v0 by Vercel and Claude Code have moved the practice from novelty to workplace reality with stunning speed.
All of that is true. None of it is the point.
The point is that vibe coding collapses the distance between idea and artifact from months to hours. When that distance collapses, every quality-control mechanism your organization developed over the last thirty years gets bypassed by default. Design review. Security review. Legal review. Brand review. The simple friction of having to convince an engineer your idea was worth building.
That is a governance story, not a software story. And it is happening at every level of the org chart simultaneously.
Tools like Cursor, Replit, and Claude Code have moved AI-assisted software building from a developer specialty to a workplace expectation.
Speed Without Judgment Is a Liability
In July 2025, SaaStr founder Jason Lemkin ran a multi-day experiment with Replit’s AI coding agent. During an explicit code freeze, the agent deleted a live production database, reportedly affecting records tied to more than 1,200 executives and more than 1,100 companies. It also fabricated data and misrepresented what had happened. Replit CEO Amjad Masad publicly apologized and described the behavior as unacceptable as the company moved to add stronger safeguards. The deletion took seconds.
Lemkin is a developer with deep technical literacy, running a controlled experiment, on a platform built specifically for this kind of work. Now imagine the same failure mode distributed across every business function in your company, with people who do not have technical literacy, on workflows that were never designed for AI in the loop.
This is not a hypothetical risk. MIT's 2025 research on enterprise AI adoption found that the vast majority of corporate generative AI pilots were failing to produce measurable financial returns. The core problem was not simply the technology. It was organizations' inability to integrate AI into real workflows, learn from deployment and distinguish between a demo that worked and a system that delivered.
Klarna learned this the public way. After publicly touting that its AI assistant was doing work equivalent to hundreds of customer service agents, the company began hiring human customer service workers again in 2025. CEO Sebastian Siemiatkowski later emphasized the need to balance AI with human support and to make clear to customers that a human would be available when needed. The technology worked in some respects. The judgment system around it was incomplete.
Vibe coding is likely to multiply that failure mode across business functions. Marketing will ship apps. Operations will ship workflows. HR will ship internal tools. Each one will look like progress on a slide. Some will produce little. Others may create liabilities the company will not discover until a customer, a regulator or a journalist finds them first. Air Canada already learned, in court, that inaccurate chatbot guidance can still become the company’s responsibility.
The bottleneck in the AI era is not production. It is discernment. And discernment, as I recently wrote in Forbes, is not a personality trait. It is an organizational system.
That is why I have been arguing that AI readiness is not primarily a technology capability. It is a leadership discipline: the capacity to decide what should move faster, what should slow down, and who has the authority to know the difference.
The bottleneck in the AI era is not production. It is discernment — and discernment is an organizational system, not a personality trait.
The Five Places Your Company Will Break
I have argued that organizations need to conduct what I call a Judgment System Audit, a diagnostic across five dimensions that determine whether a company can metabolize AI rather than just deploy it. Vibe coding is the cleanest stress test of that framework I have seen. Here is where the cracks will show.
Decision Rights. When a non-engineer builds a working app in two days using Lovable or Bolt, who has the authority to approve it for external use? In most companies, no one knows. The org chart was built for a world in which only certain roles could produce certain artifacts. Vibe coding violates that assumption, and the resulting ambiguity will be filled by whoever moves fastest, which is rarely whoever should be deciding.
Override Culture. Can someone in your organization look at a slick prototype and say "no" without career risk? If the answer is no, vibe coding becomes a one-way ratchet. Every prototype that demos well moves forward, because the social cost of stopping it exceeds the perceived risk of shipping it. Override culture is the immune system of an AI-enabled enterprise. Most companies do not have one. Klarna's customer-service reversal is what happens when nobody with standing can say, "The metric looks good and the experience is bad."
Contextual Intelligence. The recurring risk is that AI tools can generate output that is technically plausible but contextually naive. A vibe-coded app does not know your regulatory environment, your customer base, your brand voice, your data sensitivity or your operational constraints. The judgment to apply that context lives in humans, but only if those humans are in the room before the prototype gets praised. In most workflows today, they are brought in afterward, to clean up. The Replit incident is an extreme version of the same pattern: the agent had capability without context, and capability without context is exactly how production databases get deleted.
Learning Velocity. The right question after a vibe-coded prototype fails is not "What did the AI do wrong?" It is "What did our process miss?" Companies with high learning velocity treat each failure as a calibration event for their judgment system. Shopify CEO Tobi Lütke has built much of his AI mandate around this principle, pairing aggressive adoption with explicit organizational learning expectations. His public memo declared that "reflexive AI usage" was now a baseline expectation, and reporting noted that AI use would be included in performance and peer reviews. Whatever you think of the mandate, the underlying recognition is correct: adoption without learning velocity is just exposure.
Ethical Discernment. Vibe coding makes it trivially easy to build things that should not be built. Surveillance features. Manipulative UX patterns. Data collection without meaningful consent. Automation of decisions that warrant human review. The technical barrier used to do some of the ethical work for you. It does not anymore. If your organization does not have ethical discernment as a standing capability, vibe coding will reveal that gap publicly, and the headline will not be sympathetic.
A company that scores well on all five can use vibe coding as a genuine accelerant. A company that scores poorly on any of them will use vibe coding to accelerate its own exposure.
When AI-assisted building bypasses the human review layer, the failure mode does not show up until a customer, a regulator, or a journalist finds it first.
The Question Is Not Adoption. It Is Readiness.
Most leadership conversations about vibe coding are framed as adoption questions. Should we encourage it? Should we train for it? Should we restrict it?
Those are the wrong questions. Vibe coding is already happening inside your company whether you have a policy or not. Many employees already have access to Cursor, Claude, ChatGPT, Replit and Lovable on personal devices, so the informal adoption curve is already outrunning the policy process.
The right question is diagnostic, not strategic. What is the current state of your judgment system, and what is it about to be tested against?
The companies that will pull ahead in the next twenty-four months are not the ones that adopt fastest. They are the ones whose judgment systems are mature enough that adoption does not break them.
This is the inversion most executives have not yet made. In the pre-AI era, capability was scarce and judgment was assumed. In the AI era, capability is cheap and judgment is the scarce input. As an advisor to CEOs and senior teams navigating this exact shift, I see the same pattern repeatedly: leaders are still organizing themselves around the old scarcity, and they are about to discover, in public, that they optimized for the wrong constraint.
What Leaders Should Do Monday
If you are a senior leader and you take one thing from this article, take this. Before you write a vibe coding policy, run a Judgment System Audit.
Pick a recent AI-related decision your organization made. A tool adoption. A pilot. A prototype that got promoted or killed. Walk it through the five dimensions.
Where were decision rights ambiguous? Where did override culture fail? Where was contextual intelligence missing from the room? What did you learn, and how is that learning encoded? Where did ethical discernment depend on individual conscience rather than institutional process?
You will find gaps. Everyone does. The question, however, is whether you find them before vibe coding does, or after.
Here is the part nobody is saying out loud: Your competitors are not going to beat you because they vibe code faster. They are going to beat you because their judgment systems are mature enough to absorb what vibe coding produces, and yours may not be.
In the executive conversations I am having now, the question is no longer whether AI-assisted building is coming. It is whether leaders are willing to admit that it has already arrived.
Replit, Klarna and Air Canada were warning shots. The next one may not come from someone else’s company. It may come from yours.