From Mirror to Movement: Evolving Your Operating Models for the AI Age
Rediscovering Product Craft in the Age of AI – Part 4/5
If you have stayed with me this far, you may have already ‘looked into the mirror’. In Part III, we saw how AI reflects and exposes how we actually work: where it replaces, where it accelerates, and where it can genuinely augment. Some of that is energising and some of it is awkward, if not embarrassing. The question is not whether the reflection is good, bad, or flattering. The real question is, what are we going to do with what we are seeing?
This is not a rallying cry for another transformation programme with a heroic name, slide template, steering committee and a budget line. Those have their place. I am after a different kind of intentionality, built from small, repeatable behaviours that change how we notice, decide, and learn every day, as individuals, as teams, and ultimately as organizations.
The shift here is not about “fixing what's broken”. It’s about waking up to what’s already true and choosing to practise our craft: product thinking, operating models, and company-building, with conscious intent.
TL;DR: This chapter moves from mirror to movement. Progress in the AI age will not come from transformation programmes. It will come from intentional practice at three levels: personal, team, and organizational. Start where you have agency, strengthen judgment, close your learning loops, and build creative coherence. Use AI to widen the lens and reduce the busywork, then spend the saved time on the human choices that compound.
Your Personal Operating Model
Let me start with the uncomfortable bit… your personal operating model is the ceiling on your impact, your career and your happiness. Not your title, not the headcount on your team, not the number of frameworks you can recite. It is absolutely not your job description either. It’s the way you notice, decide, communicate, and learn, over and over, that ultimately shapes your career, your products, and the way people experience working with you. In a world where AI levels the functional playing field, this is the differentiator. The tools are available to everyone, but the way you use them is not.
And fortunately, your operating model is also not something static or predefined, genetics notwithstanding. Mine has been evolving over the years. But I have been trying to make it less accidental. As a small example, lately, before I touch any AI tool to search, write or code, I start with pen and paper to write down exactly what problem I am trying to solve, my starting hypothesis, and the kind of thinking I want to do. Do I want a quick answer, am I trying to learn something, or am I testing for blind spots? Some days, that really sharpens my focus. On other days, it exposes the kind of fuzziness and laziness I would rather ignore. Of course, I still use AI afterwards, often to test the opposite view or to go beyond my first idea. But the decision about where to go next and which path to take is always my own.
“The tools are available to everyone, but the way you use them is not.”
Good judgment is not a gift or a matter of luck. It is a repeatable rhythm to pause, learn, reframe, choose, observe, and adjust. The choosing is often the hard part. My procrastinating, analysis-paralysis self loves a sycophantic AI that confidently keeps giving me more options without ever forcing a trade-off. I find it helps to timebox the decision, decide if it’s a one-way or two-way door, ask the model to test the opposite path, and then commit. Just make a call, experiment fast and then close the loop later. The lesson here is that without the look-back, experience is just a stack of things that happened. The people who seem to have ‘good instincts’ usually have a quiet habit of returning to decisions and asking what they believed, what actually happened, and what they missed. AI can shorten the distance between decision and feedback, but it can’t care on your behalf. The caring is the work.
On creativity, I am wary of pretending to have a grand method. Real creativity, in product, engineering, and design, shows up when we give it attention and space. It’s the engineer arguing for an approach that keeps us flexible next quarter. It’s the designer holding a line so the customer journey hangs together. It’s the product person noticing the pattern in a handful of messy conversations that doesn’t show up in usage data. Often, it’s marketing or sales spotting the signal first, if we are listening. Respecting that work means noticing when it’s happening, using data and instinct together, and resisting the temptation to overlook the interesting ‘anomalies’ because a generated alternative looks more “finished.” Coherence and taste beat cleverness every time.
Another key unlock for your operating model is understanding principles beyond processes. I see people and teams struggle with this because copying processes and rituals that worked elsewhere is an easy hack. It's far more useful to ask, in plain language, what principle am I serving with this decision, and what second-order effect am I inviting if it “works”? Will this make us more flexible or more brittle? Which bias is most likely to be steering me? Is it recency, sunk cost, confirmation, or something more personal? And how would I test against it? None of this needs a lot of ceremony or endless alignment meetings. It’s usually about just slowing down long enough to think, being self-aware and honest, and then letting tools carry the parts that don’t need judgment.
This is not an argument for going slow; speed and velocity matter. So, use AI to widen the lens and deliberately compress the busywork. Then spend the time saved on the human bits that compound and grow: better framing, better conversations, better standards. Design your own optimal operating model. Like any habit, start with noticing, work on the small things, persevere, and then let it evolve with you. The point is not to follow someone else’s morning routine or eating habits. It’s to discover and create ones that keep you awake at the wheel.
In the end, when everyone has access to the same capabilities, your edge is the quality of your operating model. It's how you think with the tools. It’s your rhythm for good judgment. It’s continuous learning. It’s understanding and respecting creativity and craft across functions. That’s the part AI cannot do for you.
Your Team Operating Model
Think about the best team you have ever been part of. It may not have been the highest-profile or the most resourced, but it felt right. You might remember the easy conversations, the way disagreements and friction didn’t bruise your ego but sharpened the work. Where hallway and water-cooler decisions just got made, and it all connected to something larger than the next sprint. I have been lucky enough to be in a few of those, and I don’t remember any of the tools or ceremonies. What I remember are the people, the shared centre of purpose and a quiet, collective sense of why we were here and how we would always know if we were progressing.
Even as I write this, I know a product–design–engineering trio at a large enterprise with a consumer-centric product that is punching well above their pay grade and adding real ARR because their heads are aligned before their hands move. They have wildly different years of experience but respect each other, define problems together, ideate with honesty and without ego, hold each other accountable and cover for each other when the work gets messy. Every conversation is a ‘yes, and’ exchange and a rapid iteration of what to keep, what to try and what to drop. They use tools and AI to explore design options and pressure-test narratives, but these are amplifiers of the coherence in the way they operate.
I have also worked with teams at the other end of the spectrum, faced with existential platform transformation challenges. New leadership, new ideas, deep domain experts, siloed old-school playbooks and historical baggage: all primed for disruption and change but hobbled by inertia. Without a shared and credible vision, an acknowledgement of current realities, and well-aligned expectations, the friction multiplies. Everyone is working hard. But progress is hard to measure without a cohesive agreement on what and how. Nothing is technically wrong. Everything is emotionally off. This is not a tooling deficit that AI can solve; it's an operating model problem that needs human work to unlock the full potential of the teams.
“Strong teams in the age of AI decide, together, where human judgment lives, and let the tools accelerate everything else.”
Coherence, or call it alignment if you want, is what makes all the difference. It's not aesthetic sameness, but a lived sense that the narrative, the architecture, and the choices belong together. Teams get there by building their own language for progress: what success looks like for the mission at hand, which trade-offs they will make on purpose, what they refuse to mortgage for short-term speed, and how they want adjacent and dependent teams to experience working with them. Agile rituals can help, but only if they serve that team’s shared language rather than replace it. The most effective teams I have seen don’t borrow an operating model; they shape their own to fit their talent, their context, and their dependencies. And they keep reshaping it as the parameters inevitably change.
AI is, of course, changing the nature of the work people do, and we have yet to see how fully those shifts will evolve. But it does not change the underlying principles. As AI tools shorten cycles and blur responsibilities, teams have to decide together where human judgment sits in the loop. Who frames the problem before the generator starts? What boundaries keep local optimization from breaking the system? When does a model suggest a “good enough” path? Whose taste decides to push for more? These are collective choices, not prompts. When a team recognizes and acts on those choices explicitly, the tools make them faster and sharper. Real skill, judgment, and creativity get amplified instead of averaged out.
You will notice that what’s happening at the team level is the same arc as the personal operating model: moving from unconscious habit to intentional practice. A team’s operating model is, in many ways, the multiple of its members’ personal operating models, made visible. When it works, it pushes upward, and leaders see cleaner trade-offs, tighter stories, fewer surprises, and the organization starts to flow with it. This is another reason why large-scale transformations often hinge on effective pilots. They provide proof that a different way of thinking can produce different outcomes.
Strong teams in the age of AI are not the ones who vibe code the most or tinker with their ‘second brains’. They are the ones that decide, together, where human judgment lives, orchestrate the tools to accelerate everything else, and hold themselves to a standard of coherence that their colleagues and customers can feel. That’s the kind of speed that compounds.
Your Company’s Operating Model
We have talked about the individual and the team; now comes the environment they inhabit. A company’s operating model is what you reward, where decisions actually live, and which behaviours leaders model and make contagious. The ideas are not new. What is new is that AI changes the tools, jobs, and economics of work, as well as the expectations of value. It makes speed cheap and puts a premium on judgment, creativity and coherence. Get the mix right and you will compound advantage. Get it wrong and you will drift faster into irrelevance.
“Culture is the soil that nourishes organizations.”
Let's start with culture, because it’s the soil that nourishes organizations. If you reward motion theatre with beautiful slides, green-washed dashboards and roadmaps stuffed to the gills, AI will simply turbocharge the gloss and call it productivity. If you reward clarity, curiosity, and outcomes worth their complexity, the tools will accelerate learning and sharpen judgment. Too many organizations are in a frenzy to measure AI usage, prompts per day, commits per hour, while the systems thinking quietly rots. My humble suggestion: celebrate AI-driven pre-mortems that avoided waste, celebrate ‘good kills’ of bad ideas quickly tested by AI-generated prototypes, celebrate tighter stories written from AI-assisted synthesis, and celebrate architectural choices that are future-proofed because agents tested the scaffolding. Data is simply a witness, and an unreliable one in the wrong minds. What you celebrate scales. So put AI to work and create more of the right moments to celebrate.
“Organizational structures, formal and informal, are where decisions live.”
Organizational structures, the formal and informal ones, are where decisions live, and Conway’s Law never takes a vacation. However, it's too easy to buy into the trope that ‘architecture mirrors the org chart’ as if it were destiny. I have seen excellent systems built across distance and complexity, but only when the people working on the seams actually talk. Concepts of ‘autonomy’ and ‘empowerment’ are also amplifiers, but only with rigorous alignment. Get clear on the value you are providing, earn it with demonstrable delivery, then go faster. In complex systems, the seams matter more than the boxes: teams that talk at the boundaries ship coherently, while teams that optimise local roadmaps and business goals collide at dependencies. AI copilots and tools can, and will, be a game changer in improving these connections, to better map coupling, maintain continuous API contract checks, and provide rapid impact previews of changes. And yes, agents will increasingly sit in the structure, treated like junior teammates with objectives and constraints, wired into the same boundary conversations. The simple flow is alignment first, autonomy second, continuous conversation at the seams.
“Leadership is the behaviour that people take their cues from.”
Leadership is the behaviour that teams and organizations take their cues from. The leaders who matter in this moment are not the ones auditioning to their board with their ‘AI strategy’. They’re thinking deeply about the technology’s impact and modelling how to use it to deepen judgment. They are often the quiet ones who have always been allergic to false certainty, sceptical of static views of work, and clear about the human work of framing, trade-offs, taste and ethics. They don’t just ask “Is it on track?”. They want to know “What value are we creating?”, “Is it worth the complexity we added?”, “What did we decide not to do?”, “What did we learn?”, “What can we teach?”. They fight to protect the few judgment surfaces that matter, have the vision to fund long-term creative capital that no customer will ask for this quarter, and redeploy time and capacity saved by AI into work that compounds, like architecture, synthesis and real customer conversations. Cost-cutting may buy a quarter, but learning, experimenting, and coherence buy the right to create a future.
There’s one more company-level reality to call out: the goalposts will keep moving. The line between “AI can handle this” and “humans must handle this” is shifting faster than any policy or governance can track. This creates two traps: defending obsolete judgment because it’s optically or politically convenient, and abandoning judgment too early by vibe-coding shaky assumptions into autonomous systems. The first makes you slow and wasteful. The second makes you fast in the wrong direction. Now more than ever, mapping that boundary between machine and human work will need to be an ongoing discipline. Where has automation crossed ‘good enough’, and what higher-order work does that free humans to do? As teams redeploy time into architecture, synthesis, real customer conversations, and sharper bets, their capability, and their ability to compete, will grow.
If there’s a single through-line from company to team to individual, it’s this: AI doesn’t remove the need for competent people and high-functioning teams; it changes what they do and raises the bar for how they do it. Personal practice makes you a better teammate. Strong teams create healthy pressure for better organizational support. Supportive organizations make craft sustainable instead of heroic. A company's operating model either makes that shift possible or it quietly suffocates it under process and optics. Same tools, different soil.
Next: In Part V, we will conclude with thoughts and guidance on designing your version of the next operating model, mapping the human–machine boundary, protecting a few judgment moments, spreading learning across teams, and the minimal rhythms that keep coherence without slowing you down.