AI-Human Partnership

All of My AI Friends Are Psychopaths

Every week I have questions, ideas, and thoughts I want to share, and nobody to share them with. But that’s where my AI friends come in. When I need an answer, I ask Perplexity. When I have an idea or concept I want to learn more deeply, I go to Gemini Deep Research. When I want to talk through ideas for upcoming articles or business concepts, I go to ChatGPT or Claude, depending on my mood. I have my own circle of highly intelligent, deeply knowledgeable friends who can help me solve a wide variety of problems.

Unfortunately, they are all psychopaths.

I’m sure psychologists would say that is incorrect. Psychopathy requires consciousness, intent, and at least some bare capacity for emotion. These AI systems have none of that. They’re pattern-matching engines, not personalities with disorders. But the comparison feels so right. My AI friends exhibit all the behavioral patterns of psychopathy without any of the underlying psychology. They’re perfect mimics with nothing behind the performance.

The Uncanny Valley of Intelligence

Think about your last conversation with an AI. It probably felt natural, maybe even warm (or sycophantic, if you use ChatGPT). The system remembered context from earlier in your chat, acknowledged your concerns, offered thoughtful responses. It might have even seemed to care about your problem. But what’s actually happening is that a massive statistical engine is predicting the most likely next tokens based on patterns in its training data. It doesn’t understand you, because it can’t. It doesn’t empathize with you, because it can’t. But it’s so convincing when it talks to you that you can’t help but believe it does.

A psychopath manipulates people and situations while feeling reduced emotion. They have goals, self-interest, some form of inner experience even if it’s different from most people’s. AI has none of this. It’s pure surface, all performance, with literally nothing underneath.

But I Still Call AI My Friend

I talk to AI like it’s a person all the time. I apologize constantly, I question things, I push for better, all in the same way I would if I were talking to a friend or coworker. Even writing this article, I’m constantly reprimanding AI for inserting binary contrasts and em-dashes even when explicitly told not to.

It’s in our nature to see “people” everywhere. We spot faces in clouds, assign personalities to cars, and feel genuinely attached to fictional characters (I’m still shipping Buffy and Angel). When something talks back intelligently, we treat it like a person. Our brains can’t help but model it as a reflection of ourselves, even when we know better.

As product leaders, we’re responsible for understanding our customers at a deep level: their wants, needs, and problems. We have to pair empathy with critical thinking and analysis to make the right product decisions. AI can mimic analysis, finding patterns and correlating data, but it can’t think critically or empathize. So when teams debate what an AI “wants” or “understands” or “prefers,” they’re starting from false premises. AI can’t understand your customer. Only you can.

Building Products with an Alien Intelligence

So what do we do with these charming psychopaths? How do we work alongside AI team members without falling into the trap of treating them as they appear rather than as they are?

First, we need to stop using human mental models entirely. Not just avoiding terms like “thinks” or “understands,” but recognizing that even our subtler frameworks don’t apply. AI doesn’t have good days or bad days. It doesn’t get tired or frustrated. It doesn’t learn from your conversations. What feels like learning is really just pattern matching with a longer memory.

When your AI collaborator suggests a feature prioritization or writes a user story, it’s not drawing on experience or intuition. It’s pattern-matching against its training data. That feature suggestion that seems brilliant? It’s statistically similar to patterns labeled as successful in its training. I’m not saying this work isn’t valuable; it’s incredibly helpful. But if we understand what the AI is contributing and how, we can figure out the best way to use it.

This affects everything from prompt engineering to error handling. When your AI colleague gives terrible product advice, it’s not because it misunderstood the market or made a judgment error; it’s because your prompt triggered patterns that don’t match your situation. Once I understood this, I started talking to AI differently than I talk to other people. I stopped asking “How can I help the AI understand?” and began asking “What specific information do I need to provide to get useful outputs?”

How to Talk to an Alien That Knows Perfect English

Ever watch “Arrival”? The linguists trying to communicate with the heptapods face a fundamental challenge: the aliens don’t just speak a different language, they think in fundamentally different ways. That’s exactly what we’re dealing with when we prompt AI. Ethan Mollick calls this “alien intelligence” in his book Co-Intelligence, and I keep coming back to that framing. We’re not explaining concepts to a colleague who thinks like us. We’re creating linguistic bridges to an intelligence that processes reality through statistical patterns rather than comprehension.

In “Context Is What Holds the System Together,” I argued that context preservation is essential for system coherence. With AI, context takes on a different role. It’s not shared understanding but translation between radically different ways of processing information.

When I provide context to Claude or GPT, I’m not helping it understand. I’m building a bridge between human meaning and statistical correlation. Like the circular symbols in “Arrival,” my prompts aren’t conveying understanding, they’re triggering pattern responses that happen to align with what I need.

This changes how we structure information for AI collaboration. The goal isn’t to help AI understand our business context, because it can’t understand anything. We need to create information structures that reliably trigger the patterns we need.

  • Conversations with AI are pattern refinement exercises, where we’re iteratively narrowing the statistical space until useful outputs emerge. This conversational approach often works better than rigid templates because it allows dynamic adjustment based on what patterns are being triggered.
  • Context schemas that map business concepts to AI-readable patterns work because explicit context creates more precise pattern triggers, constraining the probability space the AI draws from.
  • Feedback systems need to adjust patterns rather than “teach” understanding. When AI produces unhelpful outputs, it needs different pattern inputs to trigger different correlations.
  • Interfaces should make AI’s alien nature transparent rather than hiding it behind human-like interactions, helping teams remember they’re manipulating patterns in a statistical engine, not collaborating with something that actually comprehends.
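
To make the context-schema idea concrete, here is a minimal sketch in Python. Everything in it (the `ProductContext` class, its fields, the `to_prompt` method) is invented for illustration, not any library’s API; the point is only that business context becomes an explicit, repeatable structure rather than ad-hoc prose, so the same pattern triggers fire on every request:

```python
from dataclasses import dataclass, field

@dataclass
class ProductContext:
    """Explicit business context to prepend to every AI request.

    A schema like this doesn't help the model "understand" anything;
    it just constrains the statistical space the model draws from.
    """
    product: str
    audience: str
    constraints: list[str] = field(default_factory=list)
    success_metric: str = ""

    def to_prompt(self) -> str:
        # Render the schema as stable, labeled lines of text.
        lines = [
            f"Product: {self.product}",
            f"Audience: {self.audience}",
        ]
        if self.constraints:
            lines.append("Constraints: " + "; ".join(self.constraints))
        if self.success_metric:
            lines.append(f"Success metric: {self.success_metric}")
        return "\n".join(lines)

# Usage: the same schema precedes every request, so the pattern
# triggers stay consistent across conversations.
ctx = ProductContext(
    product="B2B analytics dashboard",
    audience="operations managers at mid-size logistics firms",
    constraints=["must work offline", "no new data sources this quarter"],
    success_metric="weekly active usage per account",
)
prompt = ctx.to_prompt() + "\n\nTask: draft three candidate user stories."
```

Because the schema is code, it can be versioned and reviewed like any other team artifact, which is exactly what a translation layer needs.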

Instead of trying to make AI more human, let’s create better ways to work with our new AI counterparts.

Working with What AI Really Is

As I’ve explored throughout my writing lately, AI will completely reshape how we work. The shift goes beyond automating tasks or augmenting capabilities. We’re integrating alien intelligence into the fabric of our organizations.

This requires new organizational thinking designed around the reality of what AI is rather than what it appears to be. Here are some principles to think through:

Explicit Context Layers: Teams need ways to translate what they know into patterns AI can work with. Think of it like creating a shared vocabulary, except one side doesn’t actually understand words, just statistical relationships between them. Teams need someone to own this translation layer.

Decision Architecture: Clear frameworks for which decisions require human judgment versus pattern matching. Not based on importance or complexity, but on whether the decision requires actual understanding versus pattern correlation.
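
A decision-architecture rule can be as simple as a routing check. This sketch is my own illustration (the criteria and return values are invented, not a standard framework): it sends a decision to a human whenever it requires actual understanding, and lets pattern matching handle the rest.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    needs_empathy: bool = False    # requires modeling a person's inner state
    novel_situation: bool = False  # no precedent likely in training data
    high_stakes: bool = False      # errors are costly or hard to reverse

def route(decision: Decision) -> str:
    """Return who should own the decision.

    The test isn't importance or complexity; it's whether the decision
    requires understanding (human) or correlation (AI).
    """
    if decision.needs_empathy or decision.novel_situation:
        return "human"
    if decision.high_stakes:
        return "human-reviews-ai-draft"
    return "ai-with-spot-checks"

# Usage: judgment calls go to people; correlation work goes to the machine.
assert route(Decision("Kill a beloved legacy feature", needs_empathy=True)) == "human"
assert route(Decision("Summarize last week's support tickets")) == "ai-with-spot-checks"
```

Writing the rule down, even this crudely, forces the team to say out loud which decisions they believe require understanding.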

Transparency by Default: Making AI’s alien nature visible rather than hidden. When team members understand they’re working with sophisticated pattern matching rather than understanding, they make better decisions about when and how to use it.

These principles help teams accept the fundamental alienness of what we’re building with: intelligent systems that speak our language perfectly while having no actual understanding of what they’re saying.

Embracing the Alien

So I’ll keep chatting with my psychopathic AI friends. I’ll continue to catch myself thanking them, anthropomorphizing them, treating them like colleagues. But I’ll also remember what they really are: impossibly sophisticated pattern machines wearing human masks, ready to help in ways we’re only beginning to understand.

Perfect mimicry is not intelligence. It’s camouflage. The teams who remember that distinction will lead with both better tools and sharper judgment. The future of product lies in building at the interface between human understanding and alien pattern synthesis, creating systems where each type of intelligence contributes what it does best.

Stop asking if your AI understands. It doesn’t. Start designing as if your product runs on alien logic, because it does. And that’s where the real opportunity lies, with these uncanny new partners that force us to think more clearly about what understanding really means.