The AI-Augmented Leader: Using Machines to Lead More Humanly

Ethan Mollick and his team dropped some extremely important research over the weekend. They studied 776 professionals at Procter & Gamble and found something I wasn’t surprised by: a single person using AI performed at the level of a traditional two-person team.

What did surprise me? People felt better while doing it. They felt more energized, less anxious.

This research starts to confirm what many have been saying about AI augmenting human capability. With the help of AI tools, people can work across traditional boundaries. In the study, technical specialists proposed commercially viable ideas, and business leaders spotted implementation challenges earlier. We’re starting to see what happens when everyone has an intelligent collaborator to help close their knowledge gaps.

The timing of this research is impeccable. Recently I’ve been exploring how AI might reshape product management and leadership, trying to make sense of where this all leads.

At this point, it’s clear that AI can handle specific tasks and communication loops. The more interesting question lives a level above that: What does leadership look like when AI is embedded in every part of the product workflow?

I’m beginning to see product leadership evolve toward a new direction, more focused on designing environments where good decisions emerge naturally. If AI becomes part of our everyday toolkit, we’ll need to rethink how we lead teams, shape strategy, and build trust.

From Overload to Clarity — Creating Space for Strategic Thinking

The cognitive load for product leaders today is brutal. We’re managing Slack notifications, grooming backlogs, reviewing roadmaps, and fielding customer escalations, sometimes in the same hour. A leader’s ability to think clearly about product strategy is compromised by constantly reacting to the next fire. I remember feeling this in my last role, struggling to find space to stand back and connect important dots across our product portfolio.

AI might change this dynamic in ways I hadn’t considered. Beyond handling specific tasks, it looks like it can help absorb some of the overwhelming cognitive burden that prevents deep product thinking. Imagine AI organizing research data, backlog items, and customer feedback into cohesive roadmap themes, maintaining the context and depth of detail we naturally lose as humans juggling across multiple priorities.

I can imagine a product team at a mid-sized SaaS company exploring something new. They might implement a semantic clustering system for customer feedback using AI. Before this, their product manager might spend days manually sorting through thousands of support tickets and feature requests, constantly worried they might miss something important. With an AI system in place, the PM could discover patterns they’d completely missed, like a specific subset of enterprise customers struggling with a self-service workflow that gets smoothed out in aggregate metrics. This could lead to targeted solutions that significantly improve adoption of digital tools among high-value accounts.
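As a toy illustration of the kind of clustering such a system might perform, here is a minimal sketch in Python. It stands in word overlap (Jaccard similarity) for real semantic embeddings, and the tickets, threshold, and function names are all hypothetical:

```python
def tokenize(text):
    # Lowercase word set -- a crude stand-in for semantic embeddings
    return set(text.lower().split())

def jaccard(a, b):
    # Overlap between two token sets, from 0.0 (disjoint) to 1.0 (identical)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_feedback(tickets, threshold=0.2):
    """Greedy single-pass clustering: each ticket joins the first
    cluster whose seed it resembles, otherwise it starts a new one."""
    clusters = []  # list of (seed_tokens, member_tickets) pairs
    for ticket in tickets:
        tokens = tokenize(ticket)
        for seed, members in clusters:
            if jaccard(tokens, seed) >= threshold:
                members.append(ticket)
                break
        else:
            clusters.append((tokens, [ticket]))
    return [members for _, members in clusters]

# Hypothetical support tickets
tickets = [
    "cannot reset password from login page",
    "password reset email never arrives",
    "export to CSV fails for large reports",
    "CSV export times out on large data",
]
groups = cluster_feedback(tickets)  # two themes: password reset, CSV export
```

A production system would swap `tokenize` and `jaccard` for an embedding model and cosine similarity, but the shape of the idea is the same: let the machine group the tickets so the PM spends time interpreting themes, not sorting rows.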

Many product leaders might be missing the deeper opportunity here. AI isn’t just an efficiency play. When AI offloads some of the detail tracking and information synthesis, you might regain the mental space needed for real strategic product thinking.

Shep Bryan captures this shift well: “We’re no longer managing around our limits. We’re leading with new ones.” Product organizations that pull ahead probably won’t just use AI to prioritize backlogs faster. They might use it to fundamentally rethink what they build and why.

Cognitive Synergy — Expanding Product Thinking

Product leaders develop mental models after years of experience. I’ve noticed that the frameworks that helped me make faster decisions in the past sometimes become limitations now.

Take prioritization frameworks. Most of us develop reliable methods for evaluating what to build next. We create weighted criteria, score opportunities, and compare numbers. But these frameworks often reflect our existing biases and blind spots. They help us move faster but rarely challenge our assumptions about what matters.

AI offers something more valuable than just automation. It creates what researchers call “cognitive synergy”: the ability to explore product challenges in new ways and at scales I couldn’t manage before.

Most product leaders will likely use AI to speed up their existing thought patterns. The visionary ones might use it to completely expand their thinking. You could map how roadmap changes affect multiple systems, run dozens of customer segmentation scenarios simultaneously, or explore feature interactions across your product ecosystem.

The “Cybernetic Teammate” study by Ethan Mollick and colleagues backs this up. Their research suggests AI might be reshaping how product teams operate and how expertise functions. The boundaries between product, design, and engineering expertise seem to blur, with less experienced team members performing at levels comparable to seasoned professionals when using AI.

That creates a new leadership challenge. When expertise boundaries fade and performance gaps narrow, how do you lead? What unique value might you provide as a product leader?

I’m rethinking my own leadership approach with that in mind. I’m looking for ways to design around this kind of human-AI synergy:

1. Interrogating assumptions. Using AI to systematically challenge the beliefs and patterns embedded in our roadmaps and priorities. After 20+ years in product, I know I have blind spots. AI might help surface patterns in market shifts or customer behavior that I’m too close to see.

2. Routing decisions. Finding the right balance between AI-generated synthesis and human judgment. Some product calls don’t need human review. Others, especially the ones involving ethics or creative leaps, absolutely do. The challenge is designing systems that elevate the right decisions to human attention.

3. Building feedback loops. Making sure the AI systems themselves evolve appropriately. Without checks, they’ll reinforce existing biases or optimize for what’s easy to measure instead of what actually matters to customers.
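The routing idea in point 2 can be made concrete with a simple rule set. This is a sketch, not a prescription: the `Decision` fields, the confidence threshold, and the escalation rules are assumptions I'm using for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    ai_confidence: float   # model's self-reported confidence, 0.0-1.0
    touches_ethics: bool   # e.g. pricing, privacy, accessibility
    is_reversible: bool    # can we cheaply undo it later?

def route(decision, confidence_floor=0.85):
    """Return 'auto' for low-stakes calls, 'human' otherwise.
    Thresholds and rules here are illustrative only."""
    if decision.touches_ethics:
        return "human"   # ethical calls always get human judgment
    if not decision.is_reversible:
        return "human"   # one-way doors deserve human attention
    if decision.ai_confidence < confidence_floor:
        return "human"   # uncertain synthesis needs review
    return "auto"

# Routine, reversible, high-confidence: safe to automate
auto_call = route(Decision("reorder backlog by support volume", 0.92,
                           touches_ethics=False, is_reversible=True))
# Anything touching ethics escalates, regardless of confidence
ethics_call = route(Decision("deprioritize accessibility fixes", 0.97,
                             touches_ethics=True, is_reversible=True))
```

The point isn't the specific rules. It's that the escalation criteria are explicit and reviewable, rather than buried in whoever happened to be watching the dashboard that day.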

The advantage isn’t in who has the most AI. It’s in who builds the best human-AI thinking system. That’s the product leadership shift I’m watching, and the one I want to be ready for.

Redesigning the Product Leadership Stack

Building product leadership takes clear blueprints, and in an AI-augmented world that’s not optional. I can think of four critical product design and development systems that need rethinking:

Strategy Design
Most product leaders rely on annual planning cycles, competitive analyses, and roadmaps built from stakeholder priorities and gut instinct. AI augmentation might enable continuous scenario modeling, pattern detection across user behavior, and identification of market opportunities that get missed without deeper analysis. The human part of the job becomes about values, judgment, and the courage to pursue unconventional product directions when they align with deeper customer needs.

Continuous Empathy
Instead of occasional user interviews and periodic NPS reviews, AI could provide continuous sentiment analysis and automated detection of user friction points. This approach doesn’t replace human empathy; it might help you direct empathy where it matters most.

AI can help identify patterns at scale, but there’s no substitute for direct human conversations with customers to understand their deeper needs and mental models. The most effective product leaders will likely use AI to determine where to focus their human attention, not as a replacement for genuine connection.
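To make "direct empathy where it matters" concrete, here is a deliberately crude sketch: keyword-weighted friction scoring that surfaces which comments deserve a human conversation first. The terms, weights, and function names are hypothetical; a real system would learn these signals rather than hard-code them:

```python
# Hypothetical friction signals and weights; a real system would learn these
FRICTION_TERMS = {"confusing": 2, "stuck": 3, "workaround": 2,
                  "gave up": 4, "cancel": 4, "slow": 1}

def friction_score(comment):
    """Crude keyword-weighted score; higher means more friction."""
    text = comment.lower()
    return sum(weight for term, weight in FRICTION_TERMS.items() if term in text)

def triage(comments, top_n=2):
    """Surface the comments most worth a direct human follow-up."""
    ranked = sorted(comments, key=friction_score, reverse=True)
    return ranked[:top_n]

comments = [
    "The new dashboard is slow and confusing",
    "Love the update!",
    "I gave up and found a workaround",
]
urgent = triage(comments)  # the two highest-friction comments
```

The output isn't the insight. It's a queue: these are the customers a PM should actually get on a call with this week.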

Adaptive Storytelling
Product vision traditionally gets written once, condensed to a PPT slide, and inconsistently communicated across teams. AI-augmented approaches might continuously translate vision into contexts that resonate with different teams while maintaining core principles. This could close the gap between what product leaders intend and what teams understand.

Conflict Resolution
Tension in product teams is commonplace, and even necessary, but only if understood and addressed in a timely manner. When left to fester, it builds gradually through missed feedback, vague comments in roadmap reviews, and silence in collaborative meetings. By the time it surfaces to product leaders, the damage to the team is harder to repair.

AI might help us notice these signals earlier. It can track patterns in communication or flag when collaboration slows across functions. That kind of awareness gives leaders the chance to step in sooner, with better context and a clearer read on what’s happening. AI isn’t going to have the hard conversation for you. But it might tell you when one is overdue.

This redesigned product leadership stack focuses on enhancing human capability and connection, not replacing it. Success might mean more humanity in our product development process, not just more efficiency.

Product Ethics as Strategic Advantage

When AI becomes part of the product decision-making process, the leaders who apply clear ethical principles and guidelines may be the ones who differentiate from the competition. Winning trust will be essential to get the team, the business, and the customer to accept the help of AI.

Gaining this trust might require three leadership roles that technology can’t replace, as Izabela Lundberg articulates in her human-centered AI leadership framework:

Visionary Guide
Product leaders might need to define success beyond metrics, clarifying values and principles that govern how AI serves human needs. Lundberg explains that leaders must “articulate a clear vision for how AI will support organizational goals while staying true to human values like trust.” If you can’t clearly state which values you won’t compromise for efficiency, you might already be surrendering your most important leadership function.

Ethical Guardian
Product systems probably need transparency, fairness, and oversight built in from the start. Lundberg, echoing Uncle Ben, reminds us that “with great power comes great responsibility.” She continues: “Leaders must ensure that AI is developed and deployed ethically.” We will need to focus on how AI decision architecture is designed before it’s implemented (iteration can be dangerous here). For example, how will you ensure AI-powered feature prioritization doesn’t systematically undervalue accessibility needs or niche customer segments with less prevalent data?

Human Advocate
Every AI implementation must consider its impact on human dignity, capability, and inclusion. Lundberg calls on leaders to “prioritize, protect and preserve the human experience.” Ask yourself whether your AI implementation expands human potential or diminishes it, whether it concentrates power or distributes it.

Cognitive offloading comes with a hidden risk: it silently shifts moral responsibility and accountability. When AI helps prioritize features, summarize customer feedback, or analyze user behavior, it makes value judgments based on its training data, not necessarily your organization’s values or human ethics.

If product leaders embrace AI for efficiency without considering these implications, they create tomorrow’s ethical debt today. Successful product leaders will likely design systems with ethics as infrastructure, not afterthought.

Strategic advantage might go to product leaders who design systems that augment human judgment rather than replace it, especially for ethically complex decisions.

What’s Left Is Where Product Leadership Lives

Product leaders need to face facts: AI simply can’t handle your values work, build genuine customer empathy, or navigate difficult conversations. These are fundamentally human capabilities. The key is designing systems where people and machines each do what they do best.

AI’s transformation of product leadership is already happening. The challenge now is shaping that change to enhance what makes leadership human rather than replacing it. This won’t happen accidentally, only through deliberate choices about what we automate and what we protect as uniquely human territory.

As I plan my next chapter in product leadership, I keep returning to this question: What core leadership elements will always remain human territory, no matter how advanced our AI tools become? The answer seems clear: authentic purpose, ethical judgment, creative intuition, and genuine human connection with both customers and teams.

These qualities form the core of what product leadership might become, the uniquely human elements in an increasingly augmented world. Organizations that thrive might not have the most advanced AI, but they might have designed AI systems that account for and amplify essentially human capabilities in their product development.