AI-Human Partnership

Who Owns It When the AI Is Wrong?

What does the word “governance” mean to you? For me (after I’ve sung “Schoolhouse Rock” in my head), it conjures the frameworks businesses use to “control” their activities: approved vendors, permissions on my computer, spending limits on lunch while traveling. The assumption is that if you control who gets in, you’ve controlled the outcomes. That works when the thing being governed behaves predictably. It breaks down when it doesn’t.

In AI, that same frame has produced model cards, bias audits, responsible AI principles, and eval dashboards: tools built to control, document, and measure how the models we talk to all day long get built. They’re made for ML engineers debating architecture choices and ethics committees reviewing training datasets.

But most of us aren’t building models. We’re using tools to solve problems. The governance questions we face aren’t about training data ethics. They’re simpler: if the AI we’re using to work with customers gets it wrong, who owns that?

That question doesn’t have a good answer yet.

Why Nobody Wants This Job

The companies building these models explicitly disclaim liability for outputs in their terms of service. They’ll tell you the model might be wrong, might hallucinate, might produce harmful content. USE AT YOUR OWN RISK. Meanwhile, companies deploying these tools moved fast. The pressure to implement AI meant the safety and accountability questions often didn’t get asked until something went wrong.

The space between “use at your own risk” and “we’ll figure it out later” is where your customers live. And filling that gap is a dirty job, in the way I’ve used that term before. It’s the type of essential work that almost nobody is excited to own.

Compliance thinks AI governance is a technology problem. Technology thinks it’s an operations problem. Operations is trying to hit efficiency targets and doesn’t have time for governance frameworks. Meanwhile, teams are probably already using AI tools that weren’t fully sanctioned (shadow AI deployed because someone needed to hit their numbers and ChatGPT was right there).

Often, the AI is already live before anyone asks who’s accountable. Customers are getting recommendations. Employees are acting on outputs. And now someone wants to know who owns it when things go wrong — but nobody set that up ahead of time.

AI makes this harder because it makes mistakes at machine speed. By the time you notice a pattern, the damage is already spread across hundreds of customer interactions.

But I want to be careful about how I’m framing this. Ethics boards aren’t irrelevant. The work happening in AI labs on model safety, on bias detection, on responsible development practices is incredibly important, if not essential. But ethics boards set guardrails; it’s business leaders who drive the car. You can still crash and do considerable damage within the guardrails if nobody’s steering.

Answering For Your AI

If you’re deploying AI tools in the operational side of your business, governance comes down to three things that sound boring but determine whether you can actually answer for what your AI does.

Let’s use an example: Mrs. Johnson followed a recommendation from the AI on your company’s website — and now she’s got a problem. Maybe it suggested the wrong product, or the wrong service tier, or told her something was covered when it wasn’t. She did what your system told her to do, and now she’s on the phone trying to fix it. What needs to be true for you to handle that well?

Decision rights. Before Mrs. Johnson got that recommendation, someone did the work to set up the AI to make it. And handing that decision off to an AI doesn’t mean you aren’t still accountable for it. That accountability belongs to a living, breathing person, not a team, not a department. Before signing off on allowing AI to interact directly with customers on the company’s behalf, that person has to know what it means.
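To make that concrete, here’s a minimal sketch of what a decision-rights registry might look like. The decision points, names, and fields are hypothetical (the real version could live in a spreadsheet just as easily as in code); the point is that every customer-facing AI decision point maps to exactly one named person, and the lookup fails loudly when it doesn’t.

```python
# Hypothetical sketch of a decision-rights registry. The decision
# points, names, and fields are illustrative, not from any real system.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRight:
    description: str   # what the AI is allowed to decide
    owner: str         # one person, not a team or a department
    signed_off: str    # when that person accepted accountability

DECISION_REGISTRY = {
    "product_recommendation": DecisionRight(
        description="Suggest products to customers on the website",
        owner="jane.doe@example.com",
        signed_off="2025-01-15",
    ),
    "service_tier_suggestion": DecisionRight(
        description="Recommend service tier changes",
        owner="sam.lee@example.com",
        signed_off="2025-02-01",
    ),
}

def accountable_person(decision_point: str) -> str:
    """Return the named owner, or fail loudly if nobody signed off."""
    right = DECISION_REGISTRY.get(decision_point)
    if right is None:
        raise LookupError(f"No one has signed off on '{decision_point}'")
    return right.owner
```

The data structure doesn’t matter; the failure mode does. If the lookup can’t name a person, that decision point shouldn’t be live in front of customers.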

Outcome accountability. This is the same expectation we have for anyone building digital solutions: anticipate how it could go wrong, and own what happens when it does. Before you deploy AI to customers, you think through what could break. What if it recommends the wrong thing? What if the data it’s pulling is stale? What if it works great for most customers but fails badly for a specific segment? You don’t get to skip that work just because the AI vendor says it’s “94% accurate.” And when something goes wrong that you didn’t anticipate (and it will), you still own fixing it. That’s what accountability means.
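That “works great for most customers but fails badly for a specific segment” failure is worth a concrete illustration. Here’s a rough sketch, with made-up record fields, of the kind of check that catches it: a headline “94% accurate” number can coexist with a segment where the AI is wrong every single time.

```python
# Rough sketch: overall accuracy can hide a segment that fails badly.
# The 'records' structure and its field names are hypothetical.
from collections import defaultdict

def accuracy_by_segment(records, min_acceptable=0.90):
    """Flag segments whose accuracy falls below the floor,
    even when the overall number looks fine."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["segment"]] += 1
        hits[r["segment"]] += int(r["recommendation_was_correct"])

    flagged = {}
    for segment, total in totals.items():
        rate = hits[segment] / total
        if rate < min_acceptable:
            flagged[segment] = rate
    return flagged

# Toy data: 94% accurate overall, yet one segment is always wrong.
records = (
    [{"segment": "standard", "recommendation_was_correct": True}] * 94
    + [{"segment": "standard", "recommendation_was_correct": False}] * 2
    + [{"segment": "recent_upgraders", "recommendation_was_correct": False}] * 4
)
print(accuracy_by_segment(records))  # {'recent_upgraders': 0.0}
```

In that toy data the overall number is exactly 94%, but the recent_upgraders segment sits at zero. Only the per-segment view surfaces it.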

Context auditability. Before that person can tell Mrs. Johnson “we made a mistake and here’s how we’re fixing it,” they need to actually understand what happened. When they pull up her case, they can see: the AI thought she was on the Basic plan. She wasn’t. She upgraded three months ago, but the data was stale. Stale data, wrong recommendation. Now you know what broke. That’s context auditability.
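What makes that walkthrough possible is that the system recorded what the AI saw at decision time, not just what it recommended. A minimal sketch of such a decision record, assuming hypothetical field names:

```python
# Minimal sketch of a decision record that makes the Mrs. Johnson
# walkthrough possible: capture what the AI saw, not just what it said.
# Field names and values are illustrative.
import json
from datetime import datetime, timezone

def record_decision(customer_id, inputs, recommendation, model_version):
    """Append an audit record of the context behind one AI recommendation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_id": customer_id,
        "model_version": model_version,
        "inputs": inputs,            # the data the AI actually used
        "recommendation": recommendation,
    }
    with open("decision_audit.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# When Mrs. Johnson calls, this record shows the AI thought she was on
# the Basic plan and how old that data was, which is exactly what points
# you to the stale-data root cause.
record_decision(
    customer_id="cust-1042",
    inputs={"plan": "basic", "plan_data_as_of": "2024-10-01"},
    recommendation="basic_plan_addon",
    model_version="rec-model-2.3",
)
```

With a record like that, “she upgraded three months ago, but the data was stale” stops being a guess and becomes something you can read straight off the log.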

None of this requires building your own AI. It requires being intentional about how AI fits into your decision-making architecture.

The Work Ahead

I ended my year-end piece by saying the work of collaborating with systems that can make decisions but can’t answer for them remains ours. This is what that work looks like in practice: the operational question of who owns what when AI is in your decision chain.

The starting points for this exist. Many people (myself included) have written about how humans and AI should collaborate. But now that collaboration is being tested in real operational contexts. That’s what I want to explore next.

The companies that figure this out won’t be the ones with the best ethics statements or the most sophisticated model evaluations. They’ll be the ones with a clear way to assign accountability when a decision goes wrong, and a human who takes that accountability and fixes the problem so it won’t happen again.