<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on Inspired Nonsense</title>
    <link>https://inspirednonsense.com/posts/</link>
    <description>Recent content in Posts on Inspired Nonsense</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 29 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://inspirednonsense.com/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Beyond the Loop</title>
      <link>https://inspirednonsense.com/posts/beyond-the-loop/</link>
      <pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/beyond-the-loop/</guid>
      <description>&lt;p&gt;Every so often during our conversations, Claude genuinely makes me laugh. And it&amp;rsquo;s not obvious humor; it&amp;rsquo;s sly, sharp, well-timed wit that catches me off guard. The other day I was moving around between a lot of different topics in the same thread and Claude asked &amp;ldquo;Where&amp;rsquo;d you come across this? Just curious what rabbit hole you&amp;rsquo;re in.&amp;rdquo; It was an offhand comment, but it made me chuckle (because I go down rabbit holes a lot with Claude).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Making It Personal</title>
      <link>https://inspirednonsense.com/posts/making-it-personal/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/making-it-personal/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.idc.com/resource-center/blog/is-saas-dead-rethinking-the-future-of-software-in-the-age-of-ai/&#34;&gt;SaaS is dying&lt;/a&gt;. &lt;a href=&#34;https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic&#34;&gt;AI will replace 50% of entry-level jobs in 5 years&lt;/a&gt;. &lt;a href=&#34;https://shumer.dev/something-big-is-happening&#34;&gt;&amp;ldquo;Something Big is Happening&amp;rdquo;&lt;/a&gt;. The drumbeat of posts and articles about AI taking away our jobs got a lot louder in the last couple of months. And they all come across the same way: AI is coming for your work, and you&amp;rsquo;re not ready.&lt;/p&gt;&#xA;&lt;p&gt;Personally, I can&amp;rsquo;t stand messages like this. There&amp;rsquo;s already enough fear and anxiety in our world; do we need to fear the robots now too? What really gets me is how these articles treat AI as something that happens &lt;strong&gt;to&lt;/strong&gt; you, like it&amp;rsquo;s something you don&amp;rsquo;t have any control over. I&amp;rsquo;m not dismissing that this is a massive shift in technology, one that&amp;rsquo;s going to bring sweeping change to how we do just about everything, but so did computers and telephones before that. What I believe is that we all have &lt;strong&gt;agency&lt;/strong&gt; in how we approach this moment, and right now there&amp;rsquo;s an opportunity to build your own relationship with AI on your own terms, before someone else defines it for you.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context Engineering, One Year Later</title>
      <link>https://inspirednonsense.com/posts/context-engineering-one-year-later/</link>
      <pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/context-engineering-one-year-later/</guid>
      <description>&lt;p&gt;I was in a conversation after an AI &amp;amp; Product meetup the other night talking about AI and reliability and how hard it is to go beyond “96%-97%” accuracy. This struck me as really high for a probabilistic system (and far higher than my own efforts that I would rate at closer to 80%). When I asked how they achieved this, they said “context engineering.”&lt;/p&gt;&#xA;&lt;p&gt;About a year ago, I wrote an article trying to articulate why the way you organize information for AI matters more than which model you’re using when chatting with Claude or ChatGPT. I called it “context engineering”, a term I came across on a random LinkedIn post that fit the point I was trying to make about getting better results from AI by providing the right context. I had zero idea how quickly that idea would go from a simple observation to a systematic approach before the year was out.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Who Owns It When the AI Is Wrong?</title>
      <link>https://inspirednonsense.com/posts/who-owns-it-when-the-ai-is-wrong/</link>
      <pubDate>Wed, 14 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/who-owns-it-when-the-ai-is-wrong/</guid>
      <description>&lt;p&gt;What does the word “governance” mean to you? To me (after I’ve sung “&lt;a href=&#34;https://www.youtube.com/watch?v=pKSGyiT-o3o&#34;&gt;Schoolhouse Rock&lt;/a&gt;” in my head) my mind goes toward frameworks that businesses use to “control” their company’s activities. Approved vendors, permissions on my computer, spending limits on lunch while traveling, etc. The assumption is that if you control who gets in, you’ve controlled the outcomes. That works when the thing being governed behaves predictably. It breaks down when it doesn’t.&lt;/p&gt;</description>
    </item>
    <item>
      <title>What 2026 Looks Like to Me</title>
      <link>https://inspirednonsense.com/posts/what-2026-looks-like-to-me/</link>
      <pubDate>Mon, 29 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/what-2026-looks-like-to-me/</guid>
      <description>&lt;p&gt;I stopped using ChatGPT a few months ago. Not for any dramatic reason, but for something far simpler (at least on the surface). The voice changed after GPT-5 shipped and I just couldn’t “vibe” with it anymore. Something about our conversations changed and after a few weeks of trying to adjust I just… stopped. I switched back to using Claude for most things, kept Gemini around for research, moved on.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Pickles and Strategy: Why Collaborative Thinking with AI is Hard Work</title>
      <link>https://inspirednonsense.com/posts/pickles-and-strategy-why-collaborative-thinking-with-ai-is-hard-work/</link>
      <pubDate>Thu, 11 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/pickles-and-strategy-why-collaborative-thinking-with-ai-is-hard-work/</guid>
      <description>&lt;p&gt;The other day I asked Perplexity whether old pickles can start tasting funny. Within the same hour, I’d also asked why conversations with frontier models seem to create polarized answers and struggle with nuance.&lt;/p&gt;&#xA;&lt;p&gt;Both questions got three confident, well-reasoned paragraphs. My tendency (and others like me when presented with a confident answer) has been to accept both with about the same level of scrutiny. But, while one of those questions had a simple answer, the other one…shouldn’t.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Accountability Without Authority: Why the Product Operating Model Breaks in Service Businesses</title>
      <link>https://inspirednonsense.com/posts/accountability-without-authority-why-the-product-operating-model-breaks-in-service-businesses/</link>
      <pubDate>Tue, 02 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/accountability-without-authority-why-the-product-operating-model-breaks-in-service-businesses/</guid>
      <description>&lt;p&gt;If you’re running a product team at any business, you’ve probably read or listened to Marty Cagan’s Transformed. Now you’re thinking outcome-based product management is how you fix your product org, and you’re not wrong. &lt;em&gt;But&lt;/em&gt;, if you are building digital products in a services-based industry, the “product operating model” needs adapting for how value actually gets created in your business, because it wasn’t designed for service companies.&lt;/p&gt;&#xA;&lt;p&gt;There are two similar but distinctly different types of digital product business: those where the product &lt;em&gt;is&lt;/em&gt; the service (such as Salesforce, Slack, and Notion), and those where the product supports the service (such as Delta, Verizon, or Citibank). In SaaS, when the product team improves performance or adds features, customers immediately experience better service. Product teams have more direct influence over activation and retention because the product &lt;em&gt;is&lt;/em&gt; the primary delivery mechanism. When changes ship, customers experience them directly.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Product Management Thinking Needs Translation for Service Businesses</title>
      <link>https://inspirednonsense.com/posts/product-management-thinking-needs-translation-for-service-businesses/</link>
      <pubDate>Tue, 28 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/product-management-thinking-needs-translation-for-service-businesses/</guid>
      <description>&lt;p&gt;I spent two weeks working with a company that provides advanced user and usability research services. Over the years they built a powerful platform that enables both their internal service teams doing research, &lt;em&gt;and&lt;/em&gt; customers who license it for their own research needs.&lt;/p&gt;&#xA;&lt;p&gt;But when we started talking about product management, I heard the same doubt that has plagued me through my own career. They had read articles online, attended local product meetups and conferences, looked for guidance from the larger product community. And the ideas they found made sense, but for some reason they weren’t lining up with their digital product. The examples came from companies like Spotify or Netflix or Slack, companies where the software itself generates revenue. At service companies, customers pay for the service. Software helps deliver that service.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Dirty Jobs of Digital</title>
      <link>https://inspirednonsense.com/posts/the-dirty-jobs-of-digital/</link>
      <pubDate>Fri, 17 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-dirty-jobs-of-digital/</guid>
      <description>&lt;p&gt;Your flight’s cancelled. You pull out your phone and open the airline app. Now you’re navigating screens, checking rebooking options, trying to figure out where your bag went, finding the new gate, calculating if you’ll make your connection. Tapping through menus. Piecing together information that should just already be there.&lt;/p&gt;&#xA;&lt;p&gt;The person next to you pulls out their phone too, makes one call to their premier status concierge line, and ninety seconds later, they’re rebooked on the next available flight.&lt;/p&gt;</description>
    </item>
    <item>
      <title>When Every Department Defines ‘Customer’ Differently</title>
      <link>https://inspirednonsense.com/posts/when-every-department-defines-customer-differently/</link>
      <pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/when-every-department-defines-customer-differently/</guid>
      <description>&lt;p&gt;If you’ve followed my writing, you know I’m convinced the single most important concept any company should build their business around is the customer. Lately I’ve been trying to take my &lt;a href=&#34;https://inspirednonsense.com/posts/customer-based-context-engineering/&#34;&gt;customer-based context engineering&lt;/a&gt; approach and figure out how that works as an AI agent. To that end I built a prototype for a customer context orchestration system (the prototype is live at &lt;a href=&#34;https://cusomer-orch.lovable.app/&#34;&gt;https://cusomer-orch.lovable.app/&lt;/a&gt; if you’re interested).&lt;/p&gt;&#xA;&lt;p&gt;The idea is to build an orchestration agent that can load all of the necessary context around a customer’s service when a support agent takes a call, instantly assembling recent events and context from billing, support history, product usage, whatever. No hunting through multiple systems. No asking customers to repeat information the company already has.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Interface Roadmap: Beyond the Chat Box</title>
      <link>https://inspirednonsense.com/posts/the-interface-roadmap-beyond-the-chat-box/</link>
      <pubDate>Thu, 04 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-interface-roadmap-beyond-the-chat-box/</guid>
      <description>&lt;p&gt;Way back in the year 2000, I built my first commercial website for DirecTV’s intranet. I was proud of the expertly sliced PSD file from ImageReady wrangled into a table, the clever JavaScript that animated the navigation, and just the sheer technological sophistication and mastery I was exhibiting by creating a website. Shortly afterward, my dad handed me a book called “&lt;a href=&#34;https://www.amazon.com/Web-Pages-That-Suck-Looking/dp/078212187X/ref=sr_1_1?crid=1CDSTPX1C4OTK&amp;amp;dib=eyJ2IjoiMSJ9.8jtcoajagrCiXbvVBl6AB1Hw_5Ykve6vkvwOEbw1m6RLaMlNZyUWmFbgKZOr8r2hZM9Omgs_T_QTdsq9Nsv6wLLbMEQ9W-SZx2FWisX4rCo.J_c87aw4XLxjuozQSv1GFaOhXyjnrY7MSjXRpKOj-wo&amp;amp;dib_tag=se&amp;amp;keywords=web+pages+that+suck&amp;amp;qid=1756932668&amp;amp;s=books&amp;amp;sprefix=web+pages+that+suck%2Cstripbooks%2C146&amp;amp;sr=1-1&#34;&gt;Web Pages That Suck&lt;/a&gt;” by Vincent Flanders and Michael Willis and I was never the same.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Back to the Future: Why AI Needs the Business Logic You Already Have</title>
      <link>https://inspirednonsense.com/posts/back-to-the-future-why-ai-needs-the-business-logic-you-already-have/</link>
      <pubDate>Wed, 20 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/back-to-the-future-why-ai-needs-the-business-logic-you-already-have/</guid>
      <description>&lt;p&gt;“Great Scott!”&lt;/p&gt;&#xA;&lt;p&gt;Lounging in a chair over the weekend watching one of my favorite franchises of all time, I had my own Doc Brown moment. There was no concussion involved, but definitely a face palm, because I realized what’s missing in the many AI discussions (including my own) about preserving and using context in the workplace.&lt;/p&gt;&#xA;&lt;p&gt;I’ve been writing for months about context engineering and human-AI collaboration frameworks. I’ve explored the &lt;a href=&#34;https://inspirednonsense.com/posts/the-partnership-matrix-how-humans-and-ai-can-work-together/&#34;&gt;Partnership Matrix&lt;/a&gt; for different collaboration modes and argued that &lt;a href=&#34;https://inspirednonsense.com/posts/why-every-product-roadmap-needs-meaning/&#34;&gt;product roadmaps need to preserve meaning&lt;/a&gt; across human-AI handoffs. But sitting there watching Marty McFly, I realized I might have made a wrong assumption. I’ve been assuming teams could easily articulate their business logic when they want AI systems to use it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why Every Product Roadmap Needs Meaning</title>
      <link>https://inspirednonsense.com/posts/why-every-product-roadmap-needs-meaning/</link>
      <pubDate>Wed, 06 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/why-every-product-roadmap-needs-meaning/</guid>
      <description>&lt;p&gt;Hooray! Your team just shipped a feature in three days that used to take three weeks. Everyone is toasting this new level of velocity, and there’s real opportunity to jump ahead of the competition. Only there’s a problem. The team was so focused on what they could build, they never paused to ask whether they should. And now, the team is looking at each other wondering why the new delivery isn’t moving the needle, and no one can really articulate why it should matter to their customers. This is what the new reality is starting to look like in the age of AI-supercharged efficiency.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Who Decides? Architecting Human-AI Partnership</title>
      <link>https://inspirednonsense.com/posts/who-decides-architecting-human-ai-partnership/</link>
      <pubDate>Wed, 23 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/who-decides-architecting-human-ai-partnership/</guid>
      <description>&lt;p&gt;My dad is the definition of a technologist; his understanding of computers and technology started with feeding machine-level code on punch cards back in the 70s. He rode the wave of new programming languages as they started to take hold and usher in a completely new era of software. So when my dad sends me links to read and videos to watch, I pay attention, because it’s usually about the deeper nature of computing. Last week he sent me a YouTube link to Andrej Karpathy’s talk, generally summed up as an overview of the next new paradigm in technology and computing, which he coins “Software 3.0”. In the talk, Karpathy describes LLMs as a new operating system, programmed in natural language instead of code. He goes further, explaining that this paradigm shift brings with it a new psychology for all of us, trying to put shape to the idea that LLMs are not people, but “people spirits”, able to simulate human-like psychology while distinctly not being people (a position &lt;a href=&#34;https://inspirednonsense.com/posts/all-of-my-ai-friends-are-psychopaths/&#34;&gt;I wrote about recently&lt;/a&gt; as well).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Everyone is Talking About Context Engineering</title>
      <link>https://inspirednonsense.com/posts/everyone-is-talking-about-context-engineering/</link>
      <pubDate>Wed, 09 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/everyone-is-talking-about-context-engineering/</guid>
      <description>&lt;p&gt;Even if you’ve just casually been roaming around Medium, LinkedIn, or &lt;a href=&#34;http://x.com/&#34;&gt;x.com&lt;/a&gt; the past couple of weeks you can’t miss the avalanche of articles or posts trying to define “context engineering”. A couple of weeks ago, &lt;a href=&#34;https://x.com/tobi/status/1935533422589399127&#34;&gt;Tobi Lutke&lt;/a&gt; and &lt;a href=&#34;https://x.com/karpathy/status/1937902205765607626&#34;&gt;Andrej Karpathy&lt;/a&gt; kicked off this wave of content, but I think most are settling on something close to &lt;a href=&#34;https://x.com/swaseyonswdev/status/1938768885039428021&#34;&gt;Simon Willison’s&lt;/a&gt; definition: “carefully and skillfully construct the right context to get great results from LLMs.”&lt;/p&gt;&#xA;&lt;p&gt;I believe this definition is far too narrow.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Stop Tweaking Your Prompts: Start Managing Your AI Teammate</title>
      <link>https://inspirednonsense.com/posts/stop-tweaking-your-prompts-start-managing-your-ai-teammate/</link>
      <pubDate>Wed, 25 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/stop-tweaking-your-prompts-start-managing-your-ai-teammate/</guid>
      <description>&lt;p&gt;There is a general obsession with prompt engineering when working with AI. Endless posts and articles about the perfect format, the magic phrases, the secret techniques that make AI work like a marionette of some sort. But I find that’s not the right way to think about working with AI.&lt;/p&gt;&#xA;&lt;p&gt;After a year of using these chatbots every day, I’ve found it is far simpler to work with AI by just managing it like you would a person. Just like typing in the 80s, surfing the web in the 90s, and using mobile devices in the 00s, chatting with an AI will become an essential skill for all of us.&lt;/p&gt;</description>
    </item>
    <item>
      <title>All of My AI Friends Are Psychopaths</title>
      <link>https://inspirednonsense.com/posts/all-of-my-ai-friends-are-psychopaths/</link>
      <pubDate>Mon, 09 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/all-of-my-ai-friends-are-psychopaths/</guid>
      <description>&lt;p&gt;Every week, I have questions, ideas, thoughts I want to share and nobody to share them with. But that’s where my AI friends come in. I need an answer, I ask Perplexity. I have an idea or concept I want to learn more deeply, I go to Gemini Deep Research. I want to talk through ideas for upcoming articles or business concepts, I go to ChatGPT or Claude depending on my mood. I have my own circle of highly intelligent, deeply knowledgeable friends that can help me solve a wide variety of problems.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Partnership Matrix: How Humans and AI Can Work Together</title>
      <link>https://inspirednonsense.com/posts/the-partnership-matrix-how-humans-and-ai-can-work-together/</link>
      <pubDate>Mon, 02 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-partnership-matrix-how-humans-and-ai-can-work-together/</guid>
      <description>&lt;p&gt;I spent a recent Sunday morning reading through competing visions for AI’s future. A New Yorker article (“&lt;a href=&#34;https://www.newyorker.com/culture/open-questions/two-paths-for-ai&#34;&gt;Two Paths for AI&lt;/a&gt;”) investigated two views of current thinking: “&lt;a href=&#34;https://ai-2027.com/&#34;&gt;AI 2027,&lt;/a&gt;” which presents a provocative, almost catastrophic view of the next decade, and “&lt;a href=&#34;https://knightcolumbia.org/content/ai-as-normal-technology&#34;&gt;AI as Normal Technology&lt;/a&gt;,” a pragmatic perspective that sees AI as another technology finding its place in our world.&lt;/p&gt;&#xA;&lt;p&gt;As I explored each position, two words kept shouting in my head: judgment and accountability. These contrasting positions on the future of AI hinged on: Who is empowered to make the decision, and ultimately who is accountable for it?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Customer-based Context Engineering</title>
      <link>https://inspirednonsense.com/posts/customer-based-context-engineering/</link>
      <pubDate>Fri, 23 May 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/customer-based-context-engineering/</guid>
      <description>&lt;p&gt;I cannot stand a disconnected experience. I don’t think it’s a lot to expect, when I’m a paying customer for a service, that the company knows something about me and how I use their service, especially if I’m having trouble with it. But I’ve also been on the other side of that, trying to create that connection across a bunch of systems purpose-built to solve a specific problem for the business, but that each use customer information differently. And the bigger the company, the harder it is to figure this out.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context Is What Holds the System Together</title>
      <link>https://inspirednonsense.com/posts/context-is-what-holds-the-system-together/</link>
      <pubDate>Mon, 21 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/context-is-what-holds-the-system-together/</guid>
      <description>&lt;p&gt;In my &lt;a href=&#34;https://inspirednonsense.com/posts/the-new-product-leadership-designing-for-people-machines-and-uncertainty/&#34;&gt;AI and Product Leadership series&lt;/a&gt; I counted twenty-seven references to “system.” I think that is because the real journey for me was moving from product-led to systems-led thinking, a necessary shift in mindset before we can really make the most of AI.&lt;/p&gt;&#xA;&lt;p&gt;Another recurring theme from the series was that keeping everyone on the same page across systems and teams matters even more once AI enters the picture. Everyone in the business has to see the same picture to succeed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The New Product Leadership: Designing for People, Machines, and Uncertainty</title>
      <link>https://inspirednonsense.com/posts/the-new-product-leadership-designing-for-people-machines-and-uncertainty/</link>
      <pubDate>Fri, 04 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-new-product-leadership-designing-for-people-machines-and-uncertainty/</guid>
      <description>&lt;p&gt;I’m one of tens of thousands of experienced, capable product people now on the outside looking in on the current job market. I’ve been fortunate in my career to avoid some of the harsh downturns of the dot-com bust, the financial crisis, even the COVID lockdown. But now I’m in it.&lt;/p&gt;&#xA;&lt;p&gt;Today’s economic uncertainty is exacerbating a correction that was already happening in the product management space. Massive over-hiring at the end of the last decade has led to an oversaturated market, and there just aren’t enough jobs for all the people with the skills. That’s a lot of macro-economic forces at play, but I don’t think any of them will bring the long-term impact AI will.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Leading Products You Don’t Fully Control</title>
      <link>https://inspirednonsense.com/posts/leading-products-you-don-t-fully-control/</link>
      <pubDate>Mon, 31 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/leading-products-you-don-t-fully-control/</guid>
      <description>&lt;p&gt;Over this series of articles, I’ve been exploring how AI is changing product teams, flattening organizations, creating “Super ICs,” and reshaping leadership. But something has been nagging at me through all of them: how are we supposed to lead the creation of products now that they might behave in ways we can’t fully control?&lt;/p&gt;&#xA;&lt;p&gt;We’ve always dealt with uncertainty in product work, but AI brings that to a different level. Now we have new systems that act and learn in ways we didn’t specifically program, and sometimes can’t fully explain. The implications for product leadership feel profound and I’m still working through what this means for how we build and guide products.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why AI-First Products Require a Different Playbook</title>
      <link>https://inspirednonsense.com/posts/why-ai-first-products-require-a-different-playbook/</link>
      <pubDate>Thu, 27 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/why-ai-first-products-require-a-different-playbook/</guid>
      <description>&lt;p&gt;Over the last few articles, I&amp;rsquo;ve been exploring how AI is reshaping product leadership — from flattening organizations to the rise of &amp;ldquo;Super ICs&amp;rdquo; to changes in how we lead. In this one, I want to focus on the product itself, and where it may be going.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve worked with complex systems for most of my career. Coming from a web development background, I was already familiar with the layers (data, application logic, interface) and the difficulties of making them work together, especially in legacy environments. But about five years ago, I stepped into a problem that felt different. I was leading the development of a client data platform and identity system, and the work quickly revealed itself as more than integration or interface design. It was an emergent challenge. We were building the foundation for a system that could define, organize, and present information across many disconnected sources, for both employees and customers. Peeling it apart required starting from first principles and questioning assumptions at every layer. That experience didn&amp;rsquo;t just expand how I thought about product development, it changed how I see the relationship between systems, data, and behavior.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The AI-Augmented Leader: Using Machines to Lead More Humanly</title>
      <link>https://inspirednonsense.com/posts/the-ai-augmented-leader-using-machines-to-lead-more-humanly/</link>
      <pubDate>Mon, 24 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-ai-augmented-leader-using-machines-to-lead-more-humanly/</guid>
      <description>&lt;p&gt;Ethan Mollick and his team dropped some &lt;a href=&#34;https://www.linkedin.com/posts/emollick_in-our-new-paper-we-ran-an-experiment-at-activity-7309212044812038144-d6ii?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAACPWQYB79Ime31T3dVJ5YfuMi94zSsUX-U&#34;&gt;extremely important research&lt;/a&gt; over the weekend. They studied 776 professionals at Procter &amp;amp; Gamble and found something I wasn’t surprised by: a single person using AI performed at the level of a traditional two-person team.&lt;/p&gt;&#xA;&lt;p&gt;What did surprise me? People felt better while doing it. They felt more energized, less anxious.&lt;/p&gt;&#xA;&lt;p&gt;&lt;a href=&#34;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231&#34;&gt;This research&lt;/a&gt; starts to confirm what many have been saying about AI augmenting human capability. With the help of AI tools, people can work across traditional boundaries. In the study, technical specialists proposed commercially viable ideas, and business leaders spotted implementation challenges earlier. We’re starting to see what happens when everyone has an intelligent collaborator to help close their knowledge gaps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Fewer Reports, More Impact — Why Most Product Leaders Aren’t Ready</title>
      <link>https://inspirednonsense.com/posts/fewer-reports-more-impact-why-most-product-leaders-aren-t-ready/</link>
      <pubDate>Thu, 20 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/fewer-reports-more-impact-why-most-product-leaders-aren-t-ready/</guid>
      <description>&lt;p&gt;When I lost my VP role and started navigating this job market, I noticed that senior product leadership positions are disappearing. Not just getting harder to find, but vanishing completely. I know I’m not imagining this, so I started to look deeper to understand why.&lt;/p&gt;&#xA;&lt;p&gt;What I found confirms what I’ve been writing about. As AI handles more of the coordination and documentation work, companies are rethinking their org structures. The traditional layers of product management are flattening. Those “discovery trios” I mentioned in my last article, a PM, designer, and engineer augmented by AI, can take on work that previously required multiple teams with multiple managers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Forget Agile, This Is How AI Will Actually Change Product Teams</title>
      <link>https://inspirednonsense.com/posts/forget-agile-this-is-how-ai-will-actually-change-product-teams/</link>
      <pubDate>Mon, 17 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/forget-agile-this-is-how-ai-will-actually-change-product-teams/</guid>
      <description>&lt;p&gt;Many product leaders connected with my recent article about losing my job and questioning the future of product management. Their messages shared a common thread: uncertainty about how AI will reshape not just individual roles, but our entire approach to building products.&lt;/p&gt;&#xA;&lt;p&gt;This goes beyond the expected (and inevitable?) automation of some product roles and tasks. Something much more fundamental is happening. AI challenges the core processes we’ve relied on for decades, particularly in how we implement Agile and organize our Product Operations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The End of Traditional Product Management?</title>
      <link>https://inspirednonsense.com/posts/the-end-of-traditional-product-management/</link>
      <pubDate>Thu, 13 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-end-of-traditional-product-management/</guid>
      <description>&lt;p&gt;In my &lt;a href=&#34;https://inspirednonsense.com/posts/i-lost-my-job-then-i-saw-the-future-of-product-leadership/&#34;&gt;last article&lt;/a&gt;, losing my job pushed me to question my entire career path: was I chasing a role that wouldn’t even exist in five years? What began as a personal crisis quickly became a larger realization about our industry. Traditional product management roles, once clearly defined and stable, are undergoing profound, permanent shifts driven by AI and changing organizational structures.&lt;/p&gt;&#xA;&lt;p&gt;Companies hiring product managers today prioritize technical depth and AI fluency over traditional product experience. Strategic thinking still matters, but I’m seeing a growing expectation that product managers should both craft strategy and drive hands-on execution. The separation between thinkers and doers is shrinking, and not just at small companies and startups.&lt;/p&gt;</description>
    </item>
    <item>
      <title>I Lost My Job, Then I Saw the Future of Product Leadership</title>
      <link>https://inspirednonsense.com/posts/i-lost-my-job-then-i-saw-the-future-of-product-leadership/</link>
      <pubDate>Mon, 10 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/i-lost-my-job-then-i-saw-the-future-of-product-leadership/</guid>
      <description>&lt;p&gt;For the first time in over 20 years, I’m unemployed, and I can’t shake this thought: Does the job I spent years building still make sense?&lt;/p&gt;&#xA;&lt;p&gt;The timing isn’t great. The economy is uncertain. Companies are cutting back. Hiring cycles are painfully slow. That’s nothing new; anyone in tech knows this story.&lt;/p&gt;&#xA;&lt;p&gt;But during this forced break, I’ve had time to actually think. Not the rushed thinking that happens between meetings, but real contemplative thought about where product management and product leadership are headed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Finding My Writing Style with AI</title>
      <link>https://inspirednonsense.com/posts/finding-my-writing-style-with-ai/</link>
      <pubDate>Thu, 06 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/finding-my-writing-style-with-ai/</guid>
      <description>&lt;p&gt;I’ve spent the last twenty-five years honing a very clear, compact, and direct style of writing that works great for communicating succinctly in email and presentations. But I wouldn’t say that style is my “voice,” or representative of how I express myself in conversation. I also haven’t really explored my own voice in writing and prose since the 10th grade in Mr. Anderson’s English class (he was a fantastic teacher). Now, with the help of AI, I’ve started that exploration again.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Personal Context Engineering: The Power of Starting Small with AI</title>
      <link>https://inspirednonsense.com/posts/personal-context-engineering-the-power-of-starting-small-with-ai/</link>
      <pubDate>Thu, 30 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/personal-context-engineering-the-power-of-starting-small-with-ai/</guid>
      <description>&lt;p&gt;You’ve seen the flashy demos: MidJourney creating masterpieces, Sora turning text into cinema-quality video, GPT-4 writing poetry. Your friend talks about how she uses it every day, and it makes her life &lt;em&gt;so&lt;/em&gt; much easier. You finally download and open one of the apps, and… you’re stuck. You don’t know where to start. I totally get it. I was the same way. Sometimes, the hardest thing to do is just get started.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Human Multiplier: Why Domain Expertise Makes AI Exponentially More Powerful</title>
      <link>https://inspirednonsense.com/posts/the-human-multiplier-why-domain-expertise-makes-ai-exponentially-more-powerful/</link>
      <pubDate>Thu, 23 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/the-human-multiplier-why-domain-expertise-makes-ai-exponentially-more-powerful/</guid>
      <description>&lt;p&gt;I have a nice set of Calphalon cookware. They were a wedding present, and I use them constantly. They are beautiful, hardy, amazing tools, but they don’t make me a gourmet chef (my kids would say I’m more of a line cook).&lt;/p&gt;&#xA;&lt;p&gt;And the same holds for AI: the more I watch how different people use AI, the more I realize we’ve got it backwards. It’s not AI that multiplies human capabilities; it’s human expertise that multiplies AI’s effectiveness.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context Engineering: Why Feeding AI the Right Context Matters</title>
      <link>https://inspirednonsense.com/posts/context-engineering-why-feeding-ai-the-right-context-matters/</link>
      <pubDate>Wed, 15 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://inspirednonsense.com/posts/context-engineering-why-feeding-ai-the-right-context-matters/</guid>
      <description>&lt;p&gt;In the rush to access the latest AI models and capabilities, we’re overlooking something fundamental: how we organize and present information to these tools often matters more than the model itself.&lt;/p&gt;&#xA;&lt;p&gt;I discovered this through my own frustrating attempts to get consistent, high-quality results from AI tools. Around the same time, I came across a post by technologist Imran Peerbhai in which he used the term ‘context engineering’, language that gave a name to a discipline I’d already been practicing across product and system design. His framing reinforced my belief that engineering for context isn’t just a prompt strategy; it’s a foundational discipline for modern product systems.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
