AI in Product Leadership

Leading Products You Don’t Fully Control

Over this series of articles, I’ve been exploring how AI is changing product teams, flattening organizations, creating “Super ICs,” and reshaping leadership. But something has been nagging at me through all of them: how are we supposed to lead the creation of products now that they might behave in ways we can’t fully control?

We’ve always dealt with uncertainty in product work, but AI brings that to a different level. Now we have systems that act and learn in ways we didn’t specifically program, and sometimes can’t fully explain. The implications for product leadership feel profound, and I’m still working through what this means for how we build and guide products.

Until very recently, we designed products so people could use them to solve specific problems. The system followed rules we defined, and the user stayed in control. But something different is happening with AI. Instead of following explicit instructions, these systems answer with probabilities. The AI’s outputs aren’t answers to problems in the traditional sense, but they can feel like it, especially to the user. Users still make the final call on using the information provided, but they’re responding to what feels like another entity, with its own perspective. I think that shift forces us as product leaders to rethink how we approach where responsibility sits for the products we design.

The Systems We’re Now Leading

Most of my product career has involved a familiar pattern: figure out what people need, prioritize features, work with developers to build them, repeat. It was never simple, but the workflow for solving problems was consistent.

Software systems, until very recently, operated within boundaries we defined. How a product was intended to be used could be mapped and understood across user types, scenarios, and outcomes. But AI introduces a level of unpredictability we haven’t seen before. These new systems recognize patterns across data we feed them and generate outputs that weren’t explicitly programmed. What AI gives back to users is different every time you ask.

As I explored in my article “Why AI-First Products Require a Different Playbook,” this shift became clear when I led that client data platform project. We knew we needed to create a new dataset, but how we would go about identifying, categorizing, and cataloging that data was a new challenge that forced us to question assumptions at every layer. We had to figure out when the system should be confident enough to make a match, what to do when it wasn’t sure, and how to handle partial information. These weren’t design choices presented on a screen mockup, but they absolutely influenced what customers experienced.

We now had a system that would be central to the customer experience, but we didn’t have a way to ensure it would always be “right.” This presents a new kind of accountability problem in product development, and it seems to require a different kind of product leadership.

Managing Unpredictable Systems

Since we can’t control how AI will interpret or respond to every interaction, I think we need to look at influencing system behavior through a different lens than we did in the past:

  • Setting thresholds for when the system acts on its own versus when it asks for help
  • Creating good backup plans for when the system is unsure
  • Setting boundaries around what actions are allowed, no matter what
  • Choosing what information the system pays attention to
  • Finding clear ways to show users when the system is uncertain
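The first three levers above can be sketched as a simple routing policy. This is a hypothetical illustration, not anything from the article: the threshold values and names (`ACT_THRESHOLD`, `route`, and so on) are assumptions chosen for clarity.

```python
from dataclasses import dataclass

ACT_THRESHOLD = 0.90   # illustrative: above this, the system acts on its own
ASK_THRESHOLD = 0.60   # illustrative: between thresholds, it asks the user

@dataclass
class Prediction:
    label: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def route(prediction: Prediction) -> str:
    """Decide whether to act autonomously, ask the user, or fall back."""
    if prediction.confidence >= ACT_THRESHOLD:
        return "act"        # confident enough to proceed on its own
    if prediction.confidence >= ASK_THRESHOLD:
        return "ask_user"   # surface the suggestion, request confirmation
    return "fallback"       # too uncertain: use the safe default behavior

print(route(Prediction("appointment", 0.95)))  # act
print(route(Prediction("appointment", 0.72)))  # ask_user
print(route(Prediction("appointment", 0.30)))  # fallback
```

The thresholds themselves are product decisions, not engineering details — where they sit determines how often the system interrupts users versus how often it acts wrongly on their behalf.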

Here’s a simple example. Showing upcoming appointments to a user usually means grabbing an array of data from a database and organizing that on screen for someone to interact with. Fairly straightforward stuff.

In an AI-powered product, this same task can become far more powerful, but the data is far less predictable. The AI might sift through emails, calendar invites, and messages to figure out what counts as an appointment for you. Now that the important product decisions extend beyond the interface, the product manager might start thinking through questions like:

  • How sure should the system be before calling something an appointment?
  • How does it tell you about its confidence level?
  • What context matters when deciding what’s important?
  • How easily can you correct the system when it’s wrong?

How these decisions are made will likely build or break the trust of your user base. And once trust is gone, so is the user.

Accountability in a Three-Way Relationship

When we build AI products, we’re creating a three-way relationship between the product team, the AI system, and the user. The accountability lines get blurry fast.

Traditional products have a clear relationship: we build it, you use it. If it breaks, that’s on us. If you use it wrong, that’s on you. But AI is so convincing that it’s hard to spot when it’s wrong. That raises new questions:

  • Who’s responsible when an AI makes a reasonable but wrong suggestion?
  • How do we make it clear to users what the system can and can’t do reliably?

Products that handle this well make their capabilities and limitations transparent to users. They don’t promise perfect answers. What they do is create clear expectations about what users should trust and what they should verify. Feedback mechanisms like the chain-of-thought explanations shown by reasoning models help users understand why the system made a particular choice, allowing them to make an informed decision about how to use that answer.

Establishing that division of responsibility between user and AI from the start seems right, but it requires forethought and planning.

The Questions That Matter Now

Recently I’ve found myself asking different questions about products than I did earlier in my career. In the past we might have asked “what features should we build?” Now the focus goes a level deeper, to how the system behaves:

  • Where should this system make decisions on its own, and where should it ask for help?
  • How do we make the system’s confidence level clear to users?
  • What feedback mechanisms will help the system learn from mistakes?
  • How do we balance automation with user control in different contexts?
  • How will we know if the system starts drifting from our intended behavior?

What’s even more challenging is that these questions don’t have universal answers. The right approach depends entirely on context: what the product does, who uses it, and what risks come with getting things wrong. Clearly, a medical diagnosis system needs different thresholds and oversight than a movie recommendation engine.

I think product teams must tackle these questions early to have a chance of building products users actually trust and winning in the future. Teams that jump straight to implementing AI in their products might end up with a new shiny widget, but lose their users and their place in the market.

Embracing a New Kind of Leadership

I don’t have this all figured out yet, but here’s where my thinking is heading.

I’m starting to accept that we might not be able to control every detail of how our products behave, but for most leaders I know this won’t be easy. Creating conditions where outcomes can emerge naturally, rather than scripting exactly what happens in every situation, is unfamiliar territory, but maybe that’s part of the new deal we are making with users. We’re handing over the keys and letting them choose their own adventure.

Interestingly, this approach reminds me of how good leaders work with teams. Effective leaders don’t micromanage. They set clear values, create supportive environments, and help teams adjust when they start heading in the wrong direction.

This mindset could work well for AI-powered products. Product leaders who succeed might not be those who demand perfect control. Instead, new leaders will be those who understand that creating the right conditions matters more than trying to plan for every possible interaction.

In my final article in this series, I’ll explore what this might mean for our daily work. I’m not offering final answers, just starting points for practices that might fit this new reality.

If our products are learning, our leadership approach should too.