From Movement to Mastery: Deliberate Product Craft in the Age of AI

Rediscovering Product Craft in the Age of AI – Part 5/5

We have spent four parts examining what is broken, why it matters, and what AI is revealing about how we actually work. We traced the erosion of craft and the slide of “product-led” into rhetoric and jargon, and then we looked into the mirror AI is holding up to our sub-optimal operating models.

It has been a deliberately introspective journey to get to the question that matters: what do you do about it?

Let me say it again: there is no playbook for any of this. Your company’s AI maturity may look similar to someone else’s, but your context is not the same, regardless of what your newly sprouted ‘AI expert’ claims. We have already established that the people, teams and companies that outperform and win are not just copying templates. You need to build your own operating models, grounded in your vision for your business, technology, and people. And that has never been truer than now, with the impending disruption of AI. As with all waves, you have to embrace the discomfort and uncertainty that come with this AI experiment as the price of creating something genuinely yours and uniquely different for your users and your market.

So what follows is not a prescription by any means. It is an invitation to catalyse and evolve product craft in small, meaningful ways. That is what this moment demands.


TL;DR: The future is not guaranteed to those racing to adopt AI fastest. It will most certainly belong to those who pair human judgment with machine capability most wisely. That begins with seeing how your current operating model truly works, identifying what problems really need solving, deciding where judgment should live, and treating every decision as a source of learning. The real work is not about adopting the latest technology or tools. The messy work is still about growing collective wisdom and nurturing the creative capital that makes it worth anything.


The Moment We Are In

We are living through one of those rare moments when everything changes fast enough that everyone notices, but slowly enough that most people think they have time.

Hundreds of billions are flowing into AI: venture capital, sovereign wealth, and big-tech R&D budgets that dwarf the GDP of small nations. The hype cycle is at fever pitch. Every company claims to be “AI-native” or “AI-first”, every consultant sells transformation, and every government is scrambling to regulate (or exploit) what it doesn't understand while racing to build capability it cannot afford to lose. The economics remain completely unproven, and most AI companies are burning cash, delivering capabilities at prices that don't yet work. Business models are being invented in real time.

Beneath the noise, something real is happening. Code that took days now takes hours. ‘Vibe coding’ is adulting into ‘velocity coding’ by real engineers. Research synthesis that used to require teams and weeks can now happen overnight. Design exploration that was bottlenecked by human bandwidth can generate hundreds of variants before lunch. Agentic systems, AI that plans, acts, and iterates with autonomy, are moving from research papers to production systems faster than most organizations can adapt their governance models.

... the near horizon is not about getting AI ‘right.’ It’s about developing new muscles ...

The human response is exactly what you would expect. It's a mix of fear and opportunism, denial and premature certainty. Some see existential risk. Others see existential opportunity. Some are exploiting the moment to sell snake oil. Others are avoiding engagement entirely, hoping to wait out the hype. Regulators are trying to write rules for technology that is evolving weekly. Ethicists are raising essential questions about bias, transparency, and control that should matter deeply, but are too uncomfortable to answer. No one actually knows how this all unfolds. But that has never stopped people from claiming they know it all.

But here is what I know from everything I have learnt:

Moments like this demand conscious builders and shapers of the frontier!

We are at the first inflection point in a long and tumultuous future, and most people, companies, and even countries are not prepared for the change. So, the near horizon is not about getting AI ‘right.’ It’s about developing new muscles and adapting our metaphorical DNA for learning and creating in these high-uncertainty conditions. It's about exploring beyond perceived obstacles. About consciously measuring what really matters, not just what moves clicks and likes. About making better judgment calls when the data is ambiguous and the stakes are real.

This moment will reward builders who can handle complexity without collapsing into either techno-optimism or techno-paralysis. People who see AI as both a genuine capability and an overhyped tool. Who understand that speed is becoming abundant, but wisdom remains the differentiator. Because here's the thing: when the tools commoditize, and they will, the only durable advantage is how thoughtfully you use them. That is the discipline you build, for your life and your craft, through deep discovery, repeated iterations and deliberate practice.


Beyond the False Choices

Let us start by dismantling some unhelpful framings.

Much of the noise around AI (automated PRDs, AI copilots, vibe-coding, agentic AI, which LLM is “better,” or whether every country should have its own sovereign model) is mostly a theatre of false choices. Even the loudest debates about AGI versus “narrow AI” seem to miss the point. These are all valid discussions that become distractions as they continually drift from the question that actually matters:

What problem are you trying to solve?

The real difference is not whether you fine-tuned your model on a trillion tokens or whether your agents can chain ten steps together without breaking. The question is whether you or your organisation knows why it is building in the first place.

Vibe-coding is the poster child for this. For a brief, dazzling moment, it looked like the future. Anyone could prompt an LLM to build an app, raise a round, and call it a company. Code generation became a party trick, and for prototypes, it worked beautifully. But as the best practitioners have been trying to point out, prototype code is not production code. Generating syntax is not the same as designing systems. The bottleneck in software was never typing speed and having ‘hands on keyboards’. It was logic and reasoning, elegant architecture, and trade-off management. As AI has made generation easy, the value of engineering judgment is going up. And thankfully, even as I write this, the evolved concept of velocity coding has been coined by real engineers, who understand how to shape architectures, AI evals, and guardrails to build with quality and at pace.

I have also worked with teams at the other end of the spectrum, faced with existential platform transformation challenges. New leadership, new ideas, deep domain experts, siloed old-school playbooks, and historical baggage: all primed for disruption and change but hobbled by inertia. Without a shared and credible vision, an honest acknowledgement of current realities, and well-aligned expectations, the friction multiplies. Everyone is working hard. But progress is hard to measure without a cohesive agreement on what and how. Nothing is technically wrong. Everything is emotionally off. This is not a tooling deficit that AI can solve; it's an operating model problem that needs human work to unlock the full potential of the teams.

The near-term labour disruption is real, but the long-term advantage will almost certainly belong to people and companies that buck the trend to invest in building deep craft and expertise.

The same game of illusions continues in the global job market. Some executives believe AI will make engineers or product managers obsolete. They are mistaking cost-reduction for progress and innovation. Others are paying absurd premiums for “AI talent,” hoping to buy genius, while graduates struggle to find their first jobs in tech. Meanwhile, some companies are already quietly relearning that much of problem-solving and innovation requires human expertise, not just automation. The near-term labour disruption is real, but the long-term advantage will almost certainly belong to people and companies that buck the trend to invest in building deep craft and expertise.

And then there is the grandest quest of all: the quest for artificial general intelligence. The Valley’s self-proclaimed prophets treat AGI as the inevitable next species. Meanwhile, the daily news cycle and the relentless trolling and AI slop on social media offer overwhelming evidence that human intelligence remains very much a work in progress. The “Florida Man” has not been confined to Florida for a while now. Let's face it, the bar for ‘general intelligence’ is still quite open to debate.

These are the false choices of the moment: speed versus substance, automation versus agency, prediction versus practice. They create noise when what we desperately need is clarity. The antidote is not cynicism or blind faith. The real work is in practising the calm judgment and discipline to use AI as a lever for a better understanding of ourselves and our craft.


The Few Things That Actually Matter

Evolving your operating model for the AI age is less about doing everything differently and more about paying attention to the few places where it matters most.

1. Make Learning Visible

Almost every organization treats “learning” as an HR perk and an optional benefit. Perhaps there is an Udemy subscription, a minimal budget for attending conferences, and almost certainly a bigger budget line item for leadership off-sites. Inside the operating model, it's buried under retrospectives and post-mortems, if it happens at all. Ironically, companies also pay high salaries to hire expertise and executives, and millions for consultants, because deep down they know that ‘learning’ is what creates advantage and the ability to compete over time. The companies and teams that pull ahead are the ones constantly able to learn by doing, where every release, every decision, and every mistake becomes usable knowledge to grow and change.

Instead of treating learning as a cultural slogan, make it a core operating principle that is embedded in your processes and habits. The more you adopt AI tools to amplify and automate your work, the more deliberate your learning loops need to be.

Make those loops visible. Capture what you believed would happen, what actually happened, and what you would do differently. Build a lightweight decision library that is searchable and used, not a museum of decks. When similar situations return, you will have institutional memory rather than institutional amnesia.

AI can help with this work by surfacing patterns across decisions, revealing which assumptions consistently fail, and showing where effort and impact were misaligned. But it cannot care on your behalf. Closing the loop between decision and outcome remains human work.
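
To make this concrete, here is a minimal sketch, in Python, of what one record in such a decision library might look like. The DecisionRecord structure, its field names, and the naive search helper are illustrative assumptions, not a prescribed schema or tool; the point is simply that belief, outcome, and revision live together and stay searchable.

    from dataclasses import dataclass, field
    from datetime import date

    # A hypothetical, minimal decision record. The fields mirror the loop
    # described above: what we believed, what happened, what we'd change.
    @dataclass
    class DecisionRecord:
        title: str
        decided_on: date
        belief: str                 # what we believed would happen
        outcome: str = ""           # what actually happened (filled in later)
        revision: str = ""          # what we would do differently next time
        tags: list[str] = field(default_factory=list)

    def search(library: list[DecisionRecord], term: str) -> list[DecisionRecord]:
        """Naive full-text search, so prior reasoning is findable, not archived."""
        term = term.lower()
        return [
            r for r in library
            if term in r.title.lower()
            or term in r.belief.lower()
            or term in r.outcome.lower()
            or any(term in t.lower() for t in r.tags)
        ]

    # Usage: record the belief at decision time; close the loop after the outcome.
    library = [
        DecisionRecord(
            title="Adopt AI copilot for support triage",
            decided_on=date(2025, 3, 1),
            belief="Triage time drops 40% within a quarter",
            outcome="Dropped 15%; edge cases still routed to humans",
            revision="Pilot on one queue first; define escalation criteria upfront",
            tags=["ai", "support"],
        ),
    ]
    print([r.title for r in search(library, "triage")])

The store itself matters far less than the habit: the outcome and revision fields are deliberately filled in later, by a human closing the loop.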

  • Learning is how your organisation converts experience into judgment. Use these prompts to find the gaps and tighten the loop.

    What’s considered learning

    • When you say “we learned X,” what actually changed in how you decide or build?

    • Which kinds of mistakes are considered training and learning, rather than failure?

    Where it happens

    • In the last quarter, where did real learning occur? Was it in discovery, delivery, escalations or sales?

    • Which of your meetings consistently produce new understanding and actions, rather than status updates?

    How it is captured

    • For your key bets, do you record what you believed, what happened, why, and what you will do differently next time?

    • Where does this live so others can find it and learn from it?

    How it is reused

    • When a similar situation returns, how do people discover prior reasoning without asking veterans in DMs?

    • What decision has been made twice in the last year that should have been made better the second time?

    Role of AI (assist, not replace)

    • Where can AI surface patterns across decisions you would not see on your own?

    • Where must a human close the loop by owning the interpretation of the AI’s response and any subsequent change in behaviour?

    Cadence and care

    • What is your minimum viable ritual for reviewing bets: frequency, owners, outputs, stakeholder buy-in?

    • Who is responsible for keeping this loop alive and healthy?

2. Recognise and Own the Moments Where Judgment Matters

Most organisations are simply not good at making decisions, especially the hard ones. Most people and leaders conflate consensus with clarity, even when clarity is uncomfortable. They mistake the company's escalation patterns for accountability. They pay lip service to ‘empowerment’ while every hard choice is quietly pushed upward by their teams. The separation of “business” from product and technology makes it worse, as decisions are made about each other rather than with each other. What often begins as good intent turns into equal measures of order-taking and decision avoidance.

Not every choice deserves deep deliberation. A few do. Those are the ones where rigorous framing matters, where trade-offs expose critical values, where “good enough” is not good enough. The discipline is to know which is which and to recognise the difference between two-way doors you can walk back through and one-way doors where the consequences stick. Most teams waste time over-analysing reversible decisions while doing their best to avoid facing the irreversible, and usually existential, ones. Knowing the difference and choosing the hard decisions is half the craft of good judgment.

As people, teams and companies, we can do better by calling out those moments. Map the handful where judgment quality truly determines outcomes, and protect them. Make time for deep work and think-out-loud debate and reflection, even when the tools might produce plausible answers faster. Really invite dissent. Do not talk about ‘safe spaces’ while behaviours, politics and culture quietly suppress it. Make alternative views easy and cheap to express. Treat these moments as practice, not threats or ceremony. 

As the capabilities of AI tools and their uses continue to evolve, so will the nature of the human decisions around them. What felt strategic yesterday may become operational tomorrow. Revisit your decisions and those tidy ‘vision,’ ‘mission,’ and ‘strategy’ documents often. Treat them as living artifacts that you never got right in the first place. Ask what has become newly strategic, what is newly automatable, and where human judgement and taste now create the most leverage.

Good judgment is not an innate trait or some special wiring in a founder or CxO’s brain. It is most useful as an organizational muscle. The companies that build it deliberately will outlearn and outlast those still waiting for certainty.

  • Good judgment is proportional: slow down for the few decisions that set direction and move fast on the rest.

    Recognise the moments

    • Which decisions are one-way doors (hard to reverse), versus two-way (relatively easy to revisit), or just a set of trade-offs over time?

    • What are the five decisions this quarter where high-quality thinking will determine specific customer outcomes?

    Ownership and trust

    • Who (person or team) is best placed to own each call, and do they actually have the mandate?

    • What would make stakeholders trust the deciders without constant escalation?

    Framing the decision

    • What are we solving, what would make us wrong, and what constraints matter most?

    • What evidence would change our mind before and after the decision?

    Dissent and alternatives

    • How can an alternative view be raised cheaply and safely without it being received as a challenge?

    • Which credible option are we not choosing, and why?

    Tempo and transparency

    • What is the right speed for this decision, given its reversibility versus its impact?

    • How will we make the reasoning visible so others can learn from it?

    Review and renewal

    • When will we revisit this decision explicitly, and what signal will trigger a rethink?

    • After the outcome, how will we capture what we learned into a decision library others will actually use?

3. Invest in Creative Capital

AI is superb at exploring adjacent possibilities. It can remix, extend, and optimize what already exists. OK, sure, it can hallucinate liberally as well. But what it cannot yet do, at least not meaningfully, is insist on what does not yet exist but should. ‘Creative capital’ is the capacity to make those leaps: to ‘see around corners’, to connect dots across boundaries, and to design products and systems that make room for surprise (or ‘delight’ if you prefer).

In teams and companies, this capability lives in more places than people want to admit. It is in the discipline of engineers who make architectural choices that create ‘future-proof’ systems. In the designers who create coherence across user touchpoints when every team is myopic about its own features. It’s in the product people who can translate a vague customer complaint into a problem worth solving. And in the leaders who can see constraints as fuel for originality instead of limitations.

This kind of creativity is not decorative or always visible. It is usually an innate sense of direction that shapes what you choose to build, what you are willing to stop building, and how you balance ambition with responsibility. It is what turns scattered effort and ad hoc ‘luck’ into continuous growth and advantage.

It rarely shows up in quarterly metrics because its payoff is typically structural, not tactical. The architectural decisions that preserve optionality, the design system that scales across teams, the user story that gives meaning to a roadmap: these are lagging, slow-return investments. Over time, they separate products that feel ‘created’ from those that merely feel ‘generated’.

AI can certainly help if used with this intent. It can widen the aperture of imagination and reveal non-obvious connections. It can generate future scenarios to debate, surface constraints earlier, and help teams test numerous hypotheses faster. It can amplify creative exploration and compress the cost of iteration. But equally, without the depth of intentional practice, it quickly devolves into a sycophantic parrot, as exemplified by much of the AI slop that constitutes social media content and misinformation today.

That is why creative capital needs deliberate protection at a personal, team and company level. It needs transparent and deliberate investment, and the capacity to defend it. It's OK for engineers to argue for elegant design even at the expense of a dubious feature. It is OK for designers to follow the thread of research and validation even if it might delay a ship date (which was probably artificial to begin with). It's OK for product teams to explore the outlier problem that does not yet fit the roadmap but might define the next one.

Creativity is not the opposite of discipline; it is a discipline. It thrives when curiosity is paired with constraint, when reflection is treated as real work, and when the company values imagination not just in its brand and marketing but in its decisions.

  • Creative capital is the collective imagination and judgment your company can bring to bear on new possibilities.
    Use these questions to see where it lives, how it grows, and where it might be languishing.

    Where it lives

    • Where in your organisation do new ideas actually start?

    • Who is trusted to question the way things are done? Who is not? Why? 

    • When was the last time a technical or design decision meaningfully changed your product strategy?

    How it grows

    • What deliberate space do you make for curiosity and time to explore problems without a guaranteed output? Is it just the annual ‘hackathon’?

    • When someone takes a creative risk that fails, what happens next?

    • Do your rituals (reviews, planning cycles, performance metrics) reward refinement or merely speed?

    How it languishes

    • Which constraints in your system have become excuses not to try?

    • Where is AI being used to replicate what already works instead of helping you imagine what might?

    • What would happen if you removed one layer of approval between an idea and an experiment?

    Creative capital compounds in the same way trust does: slowly, through repeated choices.

     It is not about inventing more ideas; it is about nurturing the conditions where better ideas can have a chance to survive, and the best ones can thrive.

Start Where You Are

Most companies, teams or people are not short on ambition. Everyone wants to move faster, build smarter, and adopt the next thing. That is the story they tell in their slides and off-sites. But the real work of change and transformation starts with a brutally honest look at today’s reality. And the reality is that every organisation, team and person begins from a different place.

Starting where you are is a bold act of awareness. It is the most radical thing you can do because it requires you to look at your work and your world as they actually are, without the comfort of corporate party lines or slogans. It calls for the kind of honesty that rarely makes it into company updates or LinkedIn profiles.

For us as individuals, it begins as a quiet discipline of noticing how we think, decide, and learn. Ask what you understand deeply and what you are still guessing at. Pay attention to where your energy goes and what patterns repeat. The goal is not a performative ode to the latest self-help book or TikTok advice. It is the calm, non-judgemental muscle-building of noticing, changing, failing, and starting again.

Starting where you are is a bold act of awareness. It is the most radical thing you can do...

That is the work that then carries into your teams and collaborations. It’s about how you think together. It is about the honesty of exposing how decisions are really made, how disagreements get resolved, and how factual ‘truth’ moves through the group. You don’t need stand-ups, reviews, or some other ritual for this. You need the willingness to name the gap between how you believe you work and what actually happens. The simple act of naming it is in itself a change to the system.

For companies, starting where you are means facing the limits of process and the realities of culture. Systems change slowly, if at all; people can learn much more quickly. Start where learning can spread. Model better judgment in one team. Make one feedback loop visible. Protect one piece of creative work that deserves space to grow. Change that begins locally, with integrity, is far more durable than transformations announced from the stage.

AI has a role in this, but it's not a starring role. Use it to see: to test assumptions, surface inconsistencies, and question what you think you know, and then ask what that tells you. Let it sharpen your awareness rather than replace it. It’s not easy, but start small, see clearly, act where you can, and keep going from there.


The Human Advantage

What remains uniquely human is not judgment alone, or creativity, or even empathy. It is how we, each of us and as a collective, integrate all three. It is how we hold and unpack complexity, create and make sense of contradictions, and stay accountable (or not) for what follows.

The true human advantage in the age of AI is our ability to care about what we think and why we think it.

As much as I hate saying it, our advantage is not perfection. It is our fragility and our ability to connect experience, imagination, and consequence into choices that still carry meaning. Human intelligence is messy, relational, and shaped by friction. We learn through tension, through slow refinement. Through accidental epiphanies, often from disagreement, errors, and consequences. We improvise meaning before we measure it. We weigh competing truths and sometimes act anyway. These are not design flaws. They are features that might devolve into inertia but also make progress inevitable. That is the work machines have not yet learned and may never need to, despite all the AGI hysteria.

The true human advantage in the age of AI is our ability to care about what we think and why we think it. It's the ability to connect curiosity with conscience and to match intelligence with integrity. AI tools certainly extend power, but it is our values that decide how that power is used.


The Work Ahead

Product work is human long before it is technical.

If there is one thread that has become even clearer through this series, it is that product work is human long before it is technical. Tools will keep accelerating. Hype cycles will keep rising and collapsing. But the discipline of thinking well, deciding well, and building with care is still ours to practise. None of this changes overnight. There is no finish line to this work. It is a craft shaped through repetition, reflection, and the willingness to stay awake to what is actually happening inside your teams and your company.

This is the quiet philosophy of why I started Rewise. I want to stand alongside the teams in the work that matters and help them to sharpen how they think and decide, help leaders shape the conditions where good judgment and creative confidence can compound, and do it through embedded, deliberate practice that adapts to context rather than imposing a prefabricated model. Capability grows from the inside out, through people who begin, as all change does, by noticing differently and having permission to say the quiet parts out loud.

AI will level the playing field of tools. It will not level the playing field of wisdom. The real differentiator, for individuals, teams, and companies, is how deeply they invest in cultivating the judgment, imagination, and courage to learn and use these tools well. That is where the advantage will come from, not from racing to be ‘AI’, but from learning to integrate it to deepen thinking.

We are early in a long arc. Much of what matters most will take years to understand, let alone master. But arcs bend through small, consistent acts of clarity and care, through people who choose to build with intention and who remember that tools extend power but values decide how that power is used.

If there is work ahead, it is this:

  • To stay awake.

  • To keep learning.

  • To build with intention.

  • And to leave the place better than we found it.


Next

From Mirror to Movement: Evolving Your Operating Models for the AI Age