The New Product Leadership: Designing for People, Machines, and Uncertainty
I’m one of tens of thousands of experienced, capable product people now on the outside looking in on the current job market. I’ve been fortunate in my career to avoid some of the harsh downturns of the dot-com bust, the financial crisis, even the COVID lockdown. But now I’m in it.
Today’s economic uncertainty is exacerbating a correction that was already underway in product management. Massive over-hiring at the end of the last decade left the market oversaturated; there simply aren’t enough jobs for all the qualified people. Those are significant macroeconomic forces, but I don’t think any of them will bring the long-term impact AI will.
Over the last month, I’ve been trying to look ahead and prepare myself for the future of product leadership so I can bring that vision to my next role. After writing several articles exploring where AI and product leadership intersect, I’ve found that the future has a shape. It isn’t as dramatic as “Does my job still exist?” but rather “What does leadership look like now that AI is part of the team?”
AI is inevitable. It’s quickly embedding into how most knowledge work gets done, shifting the balance from what was predominantly human-led to a mix of human and system-driven work.
No, AI won’t replace product leadership. But it will transform what leadership means and how those leaders provide value. If it hasn’t already, AI will soon start to handle tasks that once consumed much of our day. That shift raises the question: What part of your work still needs you, and how do you lead a team that includes this new intelligent, slightly alien system as a contributor?
What I’ve Learned: A Pattern Across Seven Articles
Over the past few months, I’ve been writing about different aspects of this transformation. Each article explored a facet of how AI is reshaping product leadership. Together, they’ve revealed a broader pattern: AI is taking on more of the coordination, communication, execution, and routine decision-making that once defined much of the role.
That’s potentially 80% of the time and attention we used to spend as leaders. But we don’t suddenly get that time “back” to focus on strategy. The reality is more complex, and more transformative.
Here’s what I now see changing:
- Traditional hierarchies are shifting. AI is starting to take on coordination work that once justified layers of management. This could mean fewer product leadership roles overall, or at least different shapes to those roles.
- Execution is decentralizing. Leaders aren’t just managing people anymore; they’re increasingly guiding a mix of human contributors and system-led processes. This still requires leadership, but of a different kind: shaping boundaries, setting context, and designing for alignment.
- AI automates rituals, but struggles with nuance. It can summarize standups, write recaps, even propose roadmaps. But the deeper work (setting vision, making ethical tradeoffs, reconciling ambiguity) remains deeply human.
- Your impact may be felt more through systems than titles. The design of the environments where decisions get made (data flows, prioritization frameworks, team rituals) might matter more than where you sit on the org chart.
- Strategic space expands, if you choose to use it. AI can accelerate research and surface insights and patterns you might have missed, or that would have arrived too late to inform a decision. But using this new ability means elevating your focus from “What should we ship next?” to “What problems are truly worth solving now?”
- Behavior emerges from invisible layers. In AI-integrated environments, some product decisions are made by the system itself: ranking, optimizing, responding in real time. Product leadership involves more than direction-setting; it includes shaping the conditions that influence how systems evolve.
- Control is an illusion. Empowered teams and people don’t respond well to top-down direction. Ironically, neither do AI systems (for different reasons). More than ever, leadership means working through stewardship, feedback loops, and a willingness to learn from what emerges, for both humans and systems.
The entire field of product is shifting. The real work now is understanding what’s needed from us to lead it effectively.
AI Is Not Deterministic — And That Changes Everything
Before we go further, a quick note for context. Many teams still miscast AI as a deterministic problem-solver, expecting fixed outputs from fixed inputs. But AI is probabilistic: it offers percentage-based predictions, not guarantees. It can answer your question, but it can’t promise it’s right. It can only give you what it thinks is likely to be right.
This has a real impact on how we design systems, manage trust, and set expectations, especially with the people using our products. Product leaders need to understand how to work with AI’s probabilistic nature, and use it appropriately when building solutions.
Playing to AI’s strengths (communication, translation, organization, and task execution) while putting the right guardrails in place is how we support trust, accountability, and safe outcomes.
For product leaders used to managing for certainty, this is a significant mental shift. We can’t control exactly what the system will do. We can only shape the conditions in which it behaves well.
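The contrast can be made concrete in a few lines of code. This is a minimal sketch, not a real model call: `probabilistic_classify` and its hard-coded confidences are stand-ins I made up to show the shape of the problem. The point is that a probabilistic answer carries a confidence, never a guarantee, and the caller has to design for the uncertain case.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A probabilistic answer: a best guess plus how likely it is to be right."""
    answer: str
    confidence: float  # 0.0 to 1.0, the model's own estimate

def deterministic_lookup(account_id: str, balances: dict) -> float:
    # Deterministic: fixed input, fixed output, same answer every time.
    return balances[account_id]

def probabilistic_classify(ticket_text: str) -> Prediction:
    # Stand-in for a model call (hard-coded to keep the sketch self-contained).
    # A real model returns a likelihood, not a promise.
    if "refund" in ticket_text.lower():
        return Prediction(answer="billing", confidence=0.87)
    return Prediction(answer="general", confidence=0.55)

def handle(ticket_text: str, threshold: float = 0.8) -> str:
    pred = probabilistic_classify(ticket_text)
    # Design for uncertainty: act only when confidence clears the bar,
    # otherwise route to a person.
    if pred.confidence >= threshold:
        return f"route:{pred.answer}"
    return "route:human-review"
```

The guardrail here is the threshold check: the system is allowed to be wrong, but not allowed to act confidently while wrong.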
How to Lead When AI Is Part of the Team
As AI systems become embedded in more workflows, product leaders are now responsible for integrated teams made up of both people and intelligent systems. But these systems operate very differently from human teammates.
They don’t have values, intent, or context. They don’t explain their reasoning or ask for clarification. They don’t notice misalignment unless we design them to. And when things go wrong, they don’t own the outcome. We do.
Beyond being a legal and ethical responsibility, it’s a leadership challenge we must take seriously.
Leading AI within a team means shaping how it works, setting boundaries, and designing feedback into the system. AI isn’t an employee, but you are responsible for what it does, where it fits, and how its influence scales.
That’s what makes this both a technical and leadership challenge. You need to understand what AI can do, but you also need to guide it toward outcomes that reflect your team’s intent and your organization’s values.
Five Practices for Leading Hybrid Teams (Humans + AI)
So you’ve got a new team, one made up of both people and AI. How do you move forward? How do you create the right dynamic? And how is this all going to work?
You’re now leading in an environment where AI handles tasks like coordination, reporting, prioritization, knowledge synthesis, and interface generation. From meeting summaries to roadmap updates to competitive analysis, if it follows a repeatable pattern, AI is probably already doing it.
But that doesn’t change some of the basic tenets of your job as a product leader. You’re still responsible for everything that influences outcomes, especially the parts AI can’t interpret clearly:
- Making judgment calls where priorities conflict
- Creating shared understanding
- Holding trust
- Navigating ambiguity
- Taking responsibility for outcomes, even when the system played a role
Where do you start? Here are a few ideas:
1. Train and Supervise the System
Just as you wouldn’t hand important decisions to your most junior team member without guidance, you shouldn’t delegate to AI without proper supervision.
This means getting involved in prompt design, confidence tuning, and human-in-the-loop validation. It means understanding the strengths and weaknesses of your AI systems, and doing your best to create workflows that leverage the former while mitigating the latter.
When we built our client identity system at Spectrum, we faced a fundamental challenge with data quality. Our AI matching system could never reach a confidence level high enough to run on its own. That meant we needed to build a team to handle what the system couldn’t match. The leadership challenge was recognizing that the business would never accept a solution that wasn’t close to 100% correct, so the full design required a workflow and ecosystem where humans and machines worked together to make that happen.
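That kind of human-in-the-loop design often reduces to tiered routing on a match score. This sketch is illustrative, not the actual Spectrum implementation; the function name and the two thresholds are hypothetical values you would tune against your own data.

```python
def triage_match(score: float, auto_accept: float = 0.95,
                 auto_reject: float = 0.30) -> str:
    """Three-way routing for an identity-match confidence score.

    High-confidence matches flow through automatically; the ambiguous
    middle band goes to a human review team; clear non-matches are dropped.
    Thresholds here are illustrative placeholders.
    """
    if score >= auto_accept:
        return "auto-merge"
    if score <= auto_reject:
        return "no-match"
    return "human-review"

# The review team works only the band the model can't settle itself.
scores = [0.99, 0.62, 0.12, 0.88]
queue = [s for s in scores if triage_match(s) == "human-review"]
```

The design choice is that humans never re-check what the machine handles well; their attention is concentrated on the middle band where the machine is genuinely unsure.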
2. Design Feedback Systems, Not Control Systems
AI needs feedback loops just like teams do. It’s how you “coach” the system.
In deterministic systems, we design control points, places where we specify exactly what happens under certain conditions. In probabilistic systems, feedback replaces predictability. We can’t control precisely what the system does, but we can measure outcomes and provide more context to improve its ability.
Start by identifying the key metrics that matter for your AI system. Begin with accuracy, then work toward measures for concepts like fairness, timeliness, user satisfaction, and business impact. Then build the workflow to regularly review these metrics and adjust the system accordingly.
For example, with generative AI components, consider implementing:
- User feedback collection directly in the interface
- Regular review of edge cases and failures
- A/B testing of different tuning parameters
- Human validation for high-stakes outputs
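A feedback loop of this kind can be sketched in a few lines. This is a toy under stated assumptions: the class, the rolling-window size, and the adjustment formula are all illustrative, but the mechanism is the point — measured outcomes, not fixed rules, drive how much the system is allowed to do on its own.

```python
from collections import deque

class FeedbackLoop:
    """Minimal feedback loop: measure outcomes, then adjust behavior.

    A rolling window of user verdicts (True = output accepted) drives the
    escalation threshold: as observed accuracy drifts down, more outputs
    get routed to a human. All numbers here are illustrative.
    """
    def __init__(self, window: int = 50, base_threshold: float = 0.80):
        self.verdicts = deque(maxlen=window)  # oldest verdicts age out
        self.base_threshold = base_threshold

    def record(self, accepted: bool) -> None:
        # Called from user feedback collection in the interface.
        self.verdicts.append(accepted)

    def accuracy(self) -> float:
        if not self.verdicts:
            return 1.0  # no evidence yet; start from the base threshold
        return sum(self.verdicts) / len(self.verdicts)

    def escalation_threshold(self) -> float:
        # Tighten the confidence bar as observed accuracy drops, capped
        # just below "always escalate".
        return min(0.99, self.base_threshold + (1.0 - self.accuracy()) * 0.5)
```

Note what is absent: there is no control point that dictates the system's output. The loop only observes results and changes how much autonomy the system gets next time.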
3. Redistribute Authority with Intent
The best human leaders know when to delegate and when to step in. The same principle applies when working with AI. Define clearly where AI can make autonomous decisions and where human judgment is required.
Just like with people and teams, defining clear decision rights helps clarify accountability, for both humans and AI. Document what decisions get delegated to AI, but also why those particular choices align with your values and strategy. That’s the kind of context both people and AI need to be successful.
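A decision-rights document doesn't need to be heavyweight; it can live as plain data next to the system it governs. The register below is a hypothetical example I've invented to show the shape: each entry names the decision, who owns it, the boundary, and, crucially, the rationale that gives both people and AI the context.

```python
# A decision-rights register as plain data: what the AI may decide on its
# own, where a human must step in, and why. All entries are illustrative.
DECISION_RIGHTS = [
    {
        "decision": "Draft release notes",
        "owner": "ai",
        "boundary": "Human review before anything customer-facing ships",
        "rationale": "Low risk, repeatable pattern, easy to verify",
    },
    {
        "decision": "Deprecate a feature",
        "owner": "human",
        "boundary": "AI may surface usage data, never make the call",
        "rationale": "Irreversible, touches customer trust and strategy",
    },
]

def requires_human(decision: str) -> bool:
    """Look up whether a decision stays with a person."""
    for entry in DECISION_RIGHTS:
        if entry["decision"] == decision:
            return entry["owner"] == "human"
    return True  # undocumented decisions default to human judgment
```

The default in the last line is itself a leadership choice: anything you haven't explicitly delegated stays with a person.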
4. Narrate Meaning and Deepen Strategic Thinking
Numbers and charts alone usually don’t inspire action or alignment. But the story of what those numbers mean does. As AI takes over more measurement and reporting functions, human leaders must step up their ability to provide context and meaning.
This means connecting metrics to mission, explaining why certain outcomes matter, and helping teams understand the nuance behind what the dashboards say. It means being deliberate about the stories you tell and the frameworks you use to make sense of complex situations.
If AI is handling most of the routine work, your attention naturally shifts to what only you can do. Dive in. Spend more time with customers, explore ethical implications more thoroughly, and create space for the kind of strategic thinking that often gets sacrificed in today’s hectic business environment.
5. Trust, but Verify: Redefining the Trust Model for AI
With people, leaders build trust gradually, based on observed behavior over time. With AI, trust must be designed proactively: confidence scores, escalation thresholds, explainability tools.
You cannot “feel out” trust with AI; you must instrument it. Verification and validation become the operational layer of trust: monitoring, testing, shadowing.
Because AI is probabilistic, trust must be dynamic, contextual, and observable. It’s a lot like onboarding a junior teammate, with one key difference: this one never stops working, and never really understands why.
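Shadowing, the last of those instruments, is simple to operationalize: run the AI alongside people without acting on its output, measure agreement, and let autonomy be earned. The sketch below is a minimal version with a hypothetical trust ladder; the agreement cutoffs are placeholders to tune for your own risk tolerance.

```python
def shadow_agreement(ai_outputs: list, human_outputs: list) -> float:
    """Shadow mode: the AI runs in parallel with people, its output unused,
    while we measure how often it would have agreed with them."""
    if len(ai_outputs) != len(human_outputs) or not ai_outputs:
        raise ValueError("need matched, non-empty decision histories")
    matches = sum(a == h for a, h in zip(ai_outputs, human_outputs))
    return matches / len(ai_outputs)

def trust_level(agreement: float) -> str:
    # Illustrative ladder: autonomy is earned, observable, and revocable,
    # because the next batch of decisions can move the score back down.
    if agreement >= 0.95:
        return "autonomous-with-spot-checks"
    if agreement >= 0.80:
        return "human-approves-each-output"
    return "shadow-only"
```

Because the score is recomputed continuously, this is trust that stays dynamic and contextual rather than granted once and forgotten.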
These five ideas are far from exhaustive, but they do show that a new kind of product leadership is required in the future, and we need to start figuring that out now.
What Comes Next Won’t Be Clean
Most existing businesses and cultures will probably resist these shifts. Control, status, and visibility are still very real currencies of success in many work cultures, and leaders steeped in them will shy away from the need to reframe their value and reinvest in the things their teams need.
I feel this resistance myself. After decades building my career on a certain model of leadership, it’s hard to let go. But holding onto old ways of working won’t stop the change, it will just leave us behind.
Change is hard for everyone, including your team. I’m planning to start small: choose one workflow, define the AI’s role, shadow it, reflect, adjust. Build my own confidence in the system, in this new way of working, and in a new kind of leadership.
This Map Isn’t Final
What I’ve laid out here reflects what I see today, but the landscape is shifting quickly. Tomorrow’s models will evolve. New tools will emerge. Entire categories of work may look different within a year.
Still, the approach is sound. Prioritizing adaptability over rigidity, feedback over certainty, and stewardship over command: these are the muscles we’ll need to exercise as we evolve alongside intelligent systems.
And this isn’t just for product leaders. I hope this helps anyone navigating how to lead when the team includes intelligence that doesn’t think, or learn, like we do.