AI as a Mirror: What It Reveals About Your Operating Model
Rediscovering Product Craft in the Age of AI – Part 3/5
In Part I, we explored how product craft eroded under the weight of rituals and theatre. In Part II, we saw how the banner of "product-led" became rhetoric more than reality. Now comes the harder question: what happens when AI crashes the scene?
Past waves like cloud, mobile, and data reshaped the terrain by expanding what we could build. AI is all that, and something very different. It reaches into the very heart of the product operating model, and into the habits of judgment, collaboration, and product thinking that determine how organizations build in the first place.
For companies with operating models grounded in strong craft and clear product thinking, AI is accelerating learning cycles, sharpening insights, and opening new ways of working. For most others who have scraped by without that foundation, it is amplifying dysfunction and making the cracks impossible to ignore.
That is why the real differentiator won't be access to the tools, as those will commoditise quickly. The differentiator will be the operating model itself and whether it amplifies craft and judgment, or exposes the weakness of decisions and culture.
TL;DR: AI is not replacing product craft. It is exposing whether it exists. Organizations, teams, and people fall into three broad patterns: using AI as a shortcut to avoid ‘difficult’ work (Replacement), to scale dysfunction faster (Acceleration), or to deepen judgment and insight (Augmentation). Agentic AI will amplify these patterns by acting autonomously on the strengths or weaknesses of your operating model. Hidden costs like ‘judgment debt’ and ‘creativity debt’ now compound faster than most leaders realize.
Three Patterns of AI Adoption
AI has arrived at a moment when operating models are already stretched. Legacy systems, compounding tech debt, underpowered platforms, and top-down fog have made the messy middle messier than ever. And now comes a technology step-function promising speed, synthesis, and productivity gains to make even level-headed execs lose their minds. Just vibe-code and ship, right?
The question isn’t whether organizations should adopt AI. They are, and they should. It’s how they do it that matters. And from what I am seeing across startups, scale-ups, and heritage software companies, three broad patterns are emerging.
The Replacement Pattern: Avoiding ‘Difficult’ Work
This is the most seductive trap. Teams use AI to generate the artifacts of good product work without doing the actual work.
“What’s lost in the guise of efficiency is the ability to think”
Product teams that rarely talk to customers (yes, there are many) are now using AI to spin up synthetic personas and ICPs. The output looks slicker than anything they have produced before. The personas have names, jobs, pain points, and even carefully crafted quotes. But there is no real discovery underneath. No actual customer conversation. No messy insight that might contradict their preconceived understanding of the problem.
Strategy decks are glossier too, with market analysis, competitive positioning, and beautifully formatted vision statements. They follow all the right frameworks with perfect headings and corporate poetry but commit to nothing, or worse, everything. What appears to be strategic clarity is often just well-formatted wishful thinking with a suitably large future revenue number tacked on.
Even requirements and specs are getting makeovers. Teams that struggled to articulate what they were building now find their way to comprehensive-looking PRDs. However, the answers to the fundamental questions often remain vague: What problem are we actually solving? For whom? Why now? What are we explicitly choosing not to do?
What's lost in the guise of efficiency is the ability to think. The work looks shiny, but the thinking gets duller. It starts to decay under the optics and illusion of progress, and that is the most dangerous kind of stagnation.
The Acceleration Pattern: Scaling Dysfunction
The second pattern is more subtle but equally damaging. These are the organizations and teams using AI to do what they were already doing, just faster.
This hits hard, especially for the many feature-factory companies stuck in build traps and hoping to vibe-code their way to shipping more features faster. They continue to mistake velocity for progress even as customer value erodes. The PRDs, roadmaps, and demos all look polished, and the pace always feels hectic and existential. However, weak product thinking just accelerates mediocrity.
“When velocity is the primary metric, it drowns out the signals that should guide learning.”
Similarly, companies that were previously bogged down in waterfall processes, often behind a veneer of Agile, can now also utilise AI tools to ship faster. What used to be busywork has become efficient busywork. The dashboards are all green-washed, and the demos are more clever. Features pour out as engineers push code faster, but things break, and no one is sure why the feature mattered in the first place. Cynicism creeps in as people realise they're going through the motions, moving faster without a clear direction, and then carrying the blame for the mess.
When everything moves faster, feedback loops struggle to keep up. When velocity is the primary metric, it drowns out the signals that should guide learning, and teams lose the discipline of true retrospectives, reflections, and corrections that make progress sustainable. The real cost is not speed itself but the lost ability to learn and shift course.
The Agentic Multiplier: When AI Becomes an Actor
The patterns I have discussed so far assume AI remains a passive tool, something we prompt, direct, and control. But the frontier is shifting rapidly. As I write, Agentic AI systems can now plan multi-step workflows, make intermediate decisions, and execute tasks with minimal human oversight. They don’t just generate a strategy document, they analyse data, draft scenarios, test assumptions, write code, and iterate autonomously.
This is no longer science fiction. Engineering teams are using agents that refactor entire modules, run tests, fix failures, and submit PRs. Product teams are trialling agents that conduct research, synthesise findings, generate hypotheses, and even design experiments. Marketing and sales teams deploy agents that personalise outreach, adjust messaging, and optimise campaigns in real time.
The promise is intoxicating. What if you could compress months of exploratory work into days? Or have an agent continuously monitor feedback, spot emerging patterns, and take strategic actions before competitors do?
“When the principles and constraints are not explicit, AI will just as happily optimize the wrong thing beautifully.”
I am all for it. But this is where the mirror becomes unforgiving. Agentic AI doesn’t just reflect your operating model, it starts to act like it. And if the foundations are weak, the consequences will scale exponentially.
An agent instructed to “improve engagement” will quickly learn that outrage, novelty, or guilt drives clicks and start amplifying doomscrolling, fake scarcity, or incessant nudges, because no one has defined what healthy engagement means. Another agent, asked to “reduce technical debt”, will happily refactor brittle but business-critical code, breaking customisations that key customers depend on. When the principles and constraints are not explicit, AI will just as happily optimize the wrong thing beautifully.
These are not hypothetical scenarios. They are variations of poor decisions that humans make regularly. The difference is speed and scale. What used to fail slowly, with multiple chances for humans to notice and intervene, can now fail catastrophically before anyone realizes the agent was pointed in the wrong direction.
This is why agentic AI amplifies the need for what we’ll explore in Parts IV and V of this series: judgment, strategic clarity, and operating models that define boundaries and guardrails for autonomous action. The more autonomy you grant, the sharper your judgment needs to be upfront. It will be much harder to course-correct an agent mid-flight the way you might redirect a junior PM.
The Augmentation Pattern: Deepening Judgment
The third pattern looks deceptively similar on the surface but operates from fundamentally different principles.
The few measured organizations with operating models grounded in strong discovery practices are using AI to accelerate validation cycles and test more hypotheses, creating space for harder questions about priorities, ROI, and go-to-market strategies. Instead of replacing customer conversations, AI is helping teams have better ones by synthesizing feedback faster, surfacing patterns across interviews, and uncovering edge cases that deserve deeper exploration.
The same pattern is playing out in delivery. Operating models built on empowered product, design, and engineering partnerships are leveraging AI to free up time for the priorities and trade-offs that have meaningful impact. Yes, they vibe-code too, to prototype, unblock discovery, and spin up quick experiments, and then they discard most of them. They are exploring ten times more possibilities before committing to the 10x opportunities worth building.
“...operating models grounded in strong discovery practices are using AI to accelerate... ”
What separates this pattern is not the emerging tools, but the ways of working those tools amplify. The organizations where engineers feel safe to challenge assumptions, designers push beyond polish to create real value, and product and sales teams respect each other’s constraints are best positioned to define the principles, constraints, and feedback loops that grant AI the appropriate autonomy, while humans focus on the areas of judgment and mindset that matter most. Rather than using AI to skip the complicated stuff, they are using it to see more clearly, explore overlooked perspectives, and surface the assumptions and blind spots that cloud their judgment.
What these patterns really expose is not just a gap in capability but the underlying debts I have seen quietly accumulate inside many companies: debts of judgment and creativity, manifesting as inertia, ritual, and the illusion of progress.
The Hidden Compounding Costs
Technical debt has a nefarious cousin that we rarely talk about: Judgment Debt.
Every time a team shortcuts real customer understanding, they accumulate judgment debt. Every time they ship without conviction, it compounds. Every time they choose velocity over learning, or worse, mistake one for the other, the interest on judgment debt grows.
For years, this debt was hidden by slow processes and bureaucratic friction. When it took months to ship anything, the consequences of poor judgment were distant and diffuse. Bad bets failed slowly enough that you could course-correct, blame market conditions, or simply outlast the decision-makers who had made them.
“...judgment debt compounds when AI helps you make bad decisions faster”
AI has changed the physics of this completely. A questionable idea can now become a shipped feature in hours, not months. And flawed judgment doesn’t just move faster, it will just as quickly get encoded into the workflows, automation, and data that future decisions will depend on.
And if judgment debt compounds when AI helps you make bad decisions faster, imagine what happens when agentic systems make hundreds of micro-decisions on your behalf, each one building on flawed assumptions you never examined. The debt doesn't just grow, it becomes structural, embedded in automated workflows that are far harder to unwind than a single bad feature.
Unlike technical debt, which at least shows up in performance metrics and bug reports, judgment debt often masquerades as progress, albeit without outcomes.
When Organizations Optimize for Measurement Over Meaning
But there's another kind of deficit quietly compounding across teams: Creativity Debt.
When systems are engineered for delivery but not designed for evolution, they accumulate technical debt. When interfaces are assembled without coherence, they accumulate design debt. And when organizations optimise for metrics at the expense of meaning, they accumulate creativity debt, a subtler, more insidious erosion of product soul.
Social media is the clearest example. Every UX choice and algorithm is optimised for engagement. Every tweak is backed by data. Yet the result is something users keep returning to while feeling increasingly unfulfilled, or maybe that’s the point. The soul of the product fades, and it drifts toward the aptly named "enshittification." This is what happens when we optimise for measurement over meaning and accelerate without pausing for context.
“When tools make it easy to remix what already exists, it becomes increasingly challenging to create something original”
AI doesn't just raise questions of speed or output; it threatens creative capital, not through malice, but by optimising for convenience. When tools make it easy to remix what already exists, it becomes increasingly challenging to create something original. What's lost isn't efficiency, but perspective. And I say this even as I use AI to create images for these blogs and have Spotify playing Enlly Blue soundtracks in the background (do your own research 😉). The irony is not lost on me.
My point, however, is that creative capital isn't some spark of genius reserved for the few. In my experience, it's most effective as a system-level capability. It's the architectural decisions that make future change easier, not just current features possible. It's the design systems that create coherence across touchpoints, not just pretty screens. It's the product choices that consider second-order effects and long-term trust, not just short-term optimisation.
And it comes from cultivating what machines still can't quite replicate: our human intuition, shaped by constraints, diverse lived experiences, messy insights, and neural wiring that can make leaps and create moments of brilliance that defy probability. That's the kind of creativity that gives products their distinctiveness, the part that feels human.
Why These Debts Are Compounding Faster With AI
AI is a genuine unlock. It can strip away noise, surface patterns in oceans of data, and test more hypotheses faster than human teams ever could. It can free teams from analysis paralysis, compressing what once took weeks into hours. It is pushing us to imagine workflows that don’t follow deterministic, step-by-step playbooks, but instead embrace probabilistic exploration, where you generate, refine, and iterate toward better options.
“... the harder work of reimagining workflows, culture, and judgment remains untouched. ”
At the same time, it is making it easier than ever to look productive while quietly accumulating judgment and creativity debt. The imperative to "do AI" is not only revealing weak practices, it may be entrenching them. It's giving existing theatre the gloss of productivity and progress.
And if you need evidence of how widespread this pattern is, consider the recent MIT headline: "95% of enterprise GenAI pilots are failing." It sounds like a crisis for AI, but it's really a mirror on organizations. The technology is real, even if you are sceptical about the hype. The problem is that most companies are hamstrung by inertia, complexity, and decision gaps. AI pilots often become bogged down in legacy workflows, approvals, and optics, or succumb to passive resistance to change.
Execs are reaching for AI, mostly through FOMO or as a superficial fix, while the harder work of reimagining workflows, culture, and judgment remains untouched. History tells us this isn't new. The internet, mobile, and cloud all went through the same awkward adoption lag and produced their share of winners and martyrs. The difference is that AI is making the dysfunction more visible, faster, and potentially with graver consequences.
AI as Mirror, Not Oracle
The real risk is not that AI replaces product craft or product thinking. It’s that many organizations have lagged behind, clinging to operating models where judgment, creativity, and thinking have been buried in rituals and optics. Those companies will struggle to stand out and compete in an AI-abundant ecosystem where everyone has access to the best tools and models.
AI will not destroy or disrupt product craft, but it is reflecting how the operating model responds to stress. The organizations that thrive will be those that build judgment and creativity into the core of their operating models, so AI becomes a capability that serves strategy rather than substitutes for it.
“...it is reflecting how the operating model responds to stress.”
This is not a paradox to be solved. It's a new kind of organizational intersection: a deliberate, evolving relationship between machine intelligence and human responsibility. And it's in that intersection, where insight and craft meet intent and capability, that the next generation of great products will be, and are already being, built.
Next: In Part IV, we’ll explore what it takes to build in that intersection: how the product operating model can evolve from today’s output-driven routines toward work centred on judgment, creativity, and better decisions. We will look at the foundational capabilities that separate organizations positioned to thrive from those still struggling to connect their intent, insight, and impact.