AI-Human Partnership

Beyond the Loop

Every so often during our conversations, Claude genuinely makes me laugh. It's not obvious humor; it's usually sly, sharp, well-timed wit that catches me off guard. The other day I was bouncing between a lot of different topics in the same thread and Claude asked, "Where'd you come across this? Just curious what rabbit hole you're in." It was an offhand comment, but it made me chuckle (because I go down rabbit holes with Claude a lot).

But how is an AI actually humorous? I'm not entirely sure, but I do know it shapes its responses partly based on our insanely long conversation history. It pattern-matched its way to the right moment to bring humor in. Getting there requires building a working relationship with AI (I touched on this last week when I called AI a "soft skill"). To do that, you have to look at AI not just as a tool that needs supervision or a workflow orchestration engine, but as an entity unto itself. An entity with its own way of processing the questions, thoughts, and ideas you present through language, and then deciding how to respond.

Many people park this concept in something called "Human in the Loop," but that phrase makes it sound like our job is sitting at a checkpoint approving outputs from the AI, and that just doesn't track with how I work with it (and it's not how I work with other people either). I work with AI as a partner. It's similar to working with another person, but not the same (because it has weird and inconsistent tendencies that just don't map cleanly onto human interactions). Where I've landed is something closer to a shared workspace: the roles stay fluid, sometimes I'm driving and sometimes the AI is, and ideas pass between us more cleanly than any checkpoint model would allow.

Human in the Loop Is a Good Place to Start

That said, "human in the loop" really is good advice, especially early on when you're learning to work with AI. It helped a lot of people (myself included) figure out how to pay attention and be cautious with AI responses. Ethan Mollick has been one of the best voices on this, encouraging people to actually use AI while staying in control of what it produces. And that's where you should start. When you're first figuring out how AI works, how it fails, where it hallucinates, the reviewer role makes sense. You're learning the tool by checking its work.

But as you get more comfortable with the AI, the reviewer role starts to feel like a ceiling. You're spending your energy confirming or rejecting what the AI produced. The accountability is still yours, don't forget that, but the role is also exhausting and limiting. Advait Sarkar at Microsoft Research put it well when he described the risk of becoming "professional validators of a robot's opinions." You end up deciding whether the AI got it right when you could be putting that energy toward the work itself.

And the "human in the loop" framing reinforces that model. It implies that if AI is in the process, your job in that process is to be a checkpoint, which can lock you into being the reviewer instead of a co-creator. But over time you build enough context, shared understanding, and judgment about where AI is strong and where it falls short. That gives you a path to a relationship with the AI that unlocks far more than working at the bookends.

You Don’t Get Jarvis on Day One

I think I've wanted an AI partner from the beginning. From day one I was trying to get AI to act more like Jarvis from Iron Man, or even the computer on Star Trek. I wanted it to understand where I'm coming from when I start a conversation with minimal context or a random anecdote. I wanted it to get from the start of a conversation to real value as quickly as, or even quicker than, my human colleagues. But every time I thought I was getting that level of "understanding," I found I was actually getting confident-sounding but incomplete or hallucinated answers that hid the fact that the AI was taking shortcuts and doing minimal work. Why? Because there's no way AI can function as a partner without enough context about me and how I like to work to even come close to keeping up.

So I started spending time focused on building context that taught the AI how I think and write, developing preferences and memory that carried forward between sessions. It was a long process that required learning the different ways AI stores context and pulls it into each conversation.
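(A minimal sketch of that idea, assuming the Anthropic Python SDK as just one example: keep preferences and accumulated notes in a local file and load them into the system prompt of every new session. The file path, file format, and model name below are illustrative placeholders, not a description of any particular setup.)

```python
# Illustrative sketch: persist working preferences in a local file and
# feed them into each new session as part of the system prompt.
# The path, file format, and model name are placeholders.
from pathlib import Path

import anthropic

PREFS_PATH = Path("ai_context/preferences.md")  # hypothetical location

def load_context() -> str:
    """Read accumulated preferences and notes; start empty if the file doesn't exist yet."""
    return PREFS_PATH.read_text() if PREFS_PATH.exists() else ""

def ask(question: str) -> str:
    """Send a question with the stored context prepended as the system prompt."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system="Working preferences and prior context:\n" + load_context(),
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text
```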

At some point I moved beyond just “using the AI well” and found myself settling into a rhythm of “knowing how the work should be divided between us.” And I only got there by just doing it over and over and over, the same way my perfect scrambled eggs came from making them hundreds of times.

I think a lot of people have already taken this same path and are working this way with AI right now. They just call it "human in the loop" because that's the available vocabulary, or because saying "I have a working partnership with my AI" sounds like something that would get you side-eyed in a meeting. There's an open-source AI agent called OpenClaw with a massive community, and the people in it have started calling what they do "raising lobsters" (basically building up memory and context with their AI over time). They're using relationship language without even thinking about it, and if you asked most of them what they're doing, they'd probably say "automating my workflow."

What the Partnership Looks Like in Practice

A few weeks ago I took on a consulting engagement in a domain I’d never worked in (back-office finance operations for a company managing hundreds of entities across multiple states). The client needed to automate workflows that were eating their time (month-end close, cash management reporting, lender compliance packages) and I needed to understand how their systems actually connected before I could structure any of it.

I brought everything to Claude. The client’s business plan, their systems, the problems they needed solved. I knew what kind of consulting engagement this needed to be and how to structure an operational proposal, but I couldn’t have built it alone because I didn’t know this domain well enough to ask the right questions about the workflows, let alone design automation for them.

From there we built the proposal together. I made the strategic calls and built the visuals to explain the structure, while Claude handled the document format and kept the architecture coherent. A few days later I was working in a completely different thread about evaluating whether AI outputs are actually trustworthy and realized those ideas needed to be in the client's finance workflows. I brought them back to the proposal thread and Claude wove them in, connecting concepts from one conversation into a deliverable from another.

The client pushed on specifics and wanted different emphasis. I’d figure out what they were actually asking for underneath their feedback and Claude would update the proposal. After a few rounds of revision the proposal landed and we kicked off the engagement.

Judgment Is What Makes It Work

Throughout that work the roles kept shifting, and the proposal was stronger because of it. I couldn’t have built the architecture on my own because I didn’t know the domain well enough, and Claude couldn’t have scoped the engagement or read the client relationship. Because we’d built up enough shared history, I could trust Claude’s contributions in the areas where I didn’t have the expertise and focus my energy on the parts only I could get right. Nobody was “in the loop.” We were both in the work.

What holds a partnership like that together is judgment, and I mean something specific by that: reading a situation and deciding what it needs, based on context only you have, when you’re the one who has to live with the outcome. That’s not some elevated skill. Everyone already does it constantly. The receptionist who decides which call goes straight through to the CEO, the field tech who hears a sound and knows it’s the bearing and not the belt. Judgment is what lets you trust the right parts of the partnership and catch the parts that need you.

In the proposal work, my judgment is what made the trust possible. I knew the right level of formality given the relationship, and how much technical architecture the client needed to see versus simply trust. Those calls shaped everything Claude built, and because I was confident in them I could trust Claude's work on the parts I didn't have the domain knowledge to build myself. When I sent that proposal, my name was on it, and so were my relationship with that person and my credibility as someone they're trusting with their business. The AI produced the drafts, but the moment they left my hands they became mine.

And that's where the "loop" framing limits what you can build. A reviewer model (generate, check, approve or reject) changes what you invest in. If I'm going to rewrite everything anyway, why bother building the context that let Claude pull from previous conversations? And the work from that other thread on evaluating AI outputs never would have made it in, because in a reviewer model the conversations don't compound. The partnership made the proposal better than anything I could have built alone.

Relationships Take Time

The best working relationships in your life didn't start out that way. They got there because you kept showing up, learned how the other person thinks, and figured out where to trust them and where to push back. You built up shared context that made the work faster and the results better over time. Working with AI is the same way.

The proposal I built with Claude was only possible because of months of context built across conversations that had nothing to do with that client’s business plan. You don’t get good at using AI by reading about it, you get good by doing it over and over until the rhythm is yours. The more you invest in the relationship (the context, the memory, the preferences, the practice of knowing when to lead and when to trust), the more you get back from it.

Nobody can tell you what your partnership should look like, because nobody else has your read on the situation or bears the consequences. But the people who treat AI like a tool they supervise will keep getting tool-level results. Those who choose to invest in the relationship will find that the work compounds in ways they didn't expect.