Monday, October 13, 2025

Claude Code Micro Review


I’ve recently come across product managers (PMs) who are using Claude Code for PM tasks like document review.  This interested me because I assumed that Claude Code was optimized for, well, code.


It turns out that despite the name, Claude Code is essentially the CLI version of Claude for desktop.  It works a little differently than the desktop version and has some nifty features that the desktop version doesn’t have.  And it’s pretty darn useful for doing PM work.  I find this fascinating because you would think that Anthropic would add those features to the desktop product.  Perhaps they’re working on it?


There are two key features that really interest me in Claude Code.  The first is /init.  As I’ve discussed before, context is king when it comes to AI, and Claude Code has a very nice capability that I’m shocked to find missing from most AI assistants: when you issue the /init command, it scans the directory you’re in and creates a CLAUDE.md file:



The CLAUDE.md file is basically instructions for what this directory is and what you want Claude to do.  This way, you can set rules for Claude.  The original intent was that you would use Git to grab a repo, have Claude Code scan it, and then set rules like “always use React” or “conform to coding best practice Y” or some such.  However, since the underlying Claude functionality is all there inside of Claude Code, you can use it for pretty much anything you like.
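To make this concrete, here’s roughly what a CLAUDE.md might look like for a typical repo.  The project name and rules below are invented for illustration; /init generates something tailored to whatever it actually finds in your directory:

    # Project: Acme Web App

    This repo is a React + TypeScript front end for the Acme API.

    ## Rules for Claude
    - Always use React function components and hooks; no class components.
    - Run `npm test` before proposing any change.
    - Never commit directly to main; create a branch and open a PR.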


So, you could drop all of your customer interview transcripts into a directory, have Claude Code read them and then ask questions about them.  Very cool functionality for a PM.  Since it sits on the Claude platform, you can also use MCP (Model Context Protocol) servers to talk to external sources like Jira:
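For reference, connecting Claude Code to Jira is a one-time setup step.  At the time of writing, a command along these lines registers Atlassian’s remote MCP server, though the exact flags and URL may change, so check the current docs:

    claude mcp add --transport sse jira https://mcp.atlassian.com/v1/sse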



In this case, I connected it to Jira and asked it to review my current backlog and to create a new sprint focusing on the highest priority items.  I gave it some basic criteria, like security and customer impact, and told it to build the new sprint around them.  It did a good job based on the criteria I gave it.
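To give a sense of the grunt work being automated here, this is a rough Python sketch of the kind of Jira REST calls involved.  It’s my illustration, not what Claude Code actually ran; the site URL, board ID, and JQL filter are all placeholders:

    import os
    import requests

    JIRA = "https://your-company.atlassian.net"  # placeholder site
    AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
    BOARD_ID = 42  # placeholder board

    # 1. Pull the backlog, highest priority first.
    backlog = requests.get(
        f"{JIRA}/rest/agile/1.0/board/{BOARD_ID}/backlog",
        params={"jql": "priority in (Highest, High) ORDER BY priority DESC"},
        auth=AUTH,
    ).json()["issues"]

    # 2. Create a new sprint on that board.
    sprint = requests.post(
        f"{JIRA}/rest/agile/1.0/sprint",
        json={"name": "Security & Customer Impact", "originBoardId": BOARD_ID},
        auth=AUTH,
    ).json()

    # 3. Move the top-ranked issues into the new sprint.
    requests.post(
        f"{JIRA}/rest/agile/1.0/sprint/{sprint['id']}/issue",
        json={"issues": [issue["key"] for issue in backlog[:10]]},
        auth=AUTH,
    )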


This type of work can consume endless hours of PM time, so it’s interesting to me to see how much automation I can stuff into my toolchain.  I’m not really interested in taking on a brand new toolchain—I already know how to use GitHub and Jira—but I’d like my AI assistant to get in there and do some of the grunt work.  For me, this is key: AI should simply use the tools you already use and act as an extended member of your team.  When AI does that, you get huge benefits because you don’t need to retool, you don’t need to retrain everyone, and you see value immediately.


The second key feature is that it can make API calls for you or use MCP servers to access products like Jira or Slack.  I created an API token for my GitHub account and asked Claude Code to summarize the code changes to a project:



It was able to pretty accurately summarize the recent commits to this repo, something that’s super handy for a PM, but not something we usually have time for.  In the past, I would bug my eng team to find out whether feature X or Y had been checked into main.  Now, I can just ask Claude to see if a PR got merged or if the test cases passed.  Very handy, and it cuts down on the amount of engineering time I consume with dumb questions.
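Under the hood, this boils down to a simple GitHub API call.  Here’s a minimal Python sketch of the request involved; OWNER and REPO are placeholders, and the token comes from an environment variable:

    import os
    import requests

    # Fetch the 20 most recent commits from a repo.
    resp = requests.get(
        "https://api.github.com/repos/OWNER/REPO/commits",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        params={"per_page": 20},
    )
    for c in resp.json():
        # Print the short SHA and the first line of each commit message.
        print(c["sha"][:7], c["commit"]["message"].splitlines()[0])

Claude Code makes calls like this for you and then summarizes the results in plain English.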


While I’m pretty comfortable working at a command prompt, it’s not where I normally do my work. I’m usually using things like Slack instead.


It turns out that there is also a Slack integration:


https://github.com/mpociot/claude-code-slack-bot


However, this implementation is Mac-only.  Being a Windows user, I was a bit frustrated by this until I realized that I could just fix it myself using Claude Code.  So, I forked the project to my GitHub account and asked Claude Code to add Windows support.  After a bit of back and forth, I found that the original project had a couple of bugs, which Claude Code also fixed.  It took me a couple of hours (mostly because I’m not really a developer) but I got it working.  I then asked Claude Code to write a PR so these changes could go upstream to the open source version:


This is not something I could have done on my own.  In fact, in all my years working on open source (my first open source project was OpenStack in 2010), this is the very first code I have ever attempted to upstream.  I have written PRs before but always for things like doc bugs or other non-code items.  


With the Slack integration working, Claude Code is simply another team member on Slack, doing things I ask.  Note that it is running on my Windows machine now:  



Of course, you can combine these actions.  Let’s say you want it to implement a specific Jira ticket.  In this case, I asked Claude Code to take a Jira ticket, look at the code base and write a plan for how that ticket could be implemented:



Now I can take this plan and discuss it with my engineering team.  Does this plan make sense?  Does it break things?  And so on.  


Of course, I can also have Claude Code just do the work.  After approving the plan, I had it open the PR directly via the GitHub integration:



Claude Code is really good at creating a PR and making the change.  It talks to Git natively and correctly creates a branch, commits changes, and so on.  Since this is the original purpose of Claude Code, that’s not terribly surprising.
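For reference, the flow it automates is roughly what you would do by hand with Git and the GitHub CLI.  The branch name and ticket ID below are made up:

    git checkout -b fix/PROJ-123-slack-windows-support
    git add -A
    git commit -m "PROJ-123: add Windows support"
    git push -u origin fix/PROJ-123-slack-windows-support
    gh pr create --title "Add Windows support (PROJ-123)" --body "Details here"

Claude Code runs these steps for you and writes the commit message and PR description itself.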


In summary, if you’re willing to put up with a command-line interface, you can pretty easily build a custom AI assistant for your project that talks to Jira, Git, Slack or whatever you want and does work across those platforms.  Note that I didn’t write any code here; I just gave Claude Code instructions and it did the work.


IMPORTANT NOTE:  If you are using Claude Code or any other tool to access APIs, please be careful how you manage your API keys and other secrets.  Do not hard-code API keys into your code and do not upload your passwords or keys to GitHub.  The safest approach is to use a secrets manager, or to store them for local use in a .env file or an environment variable.
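For example, here’s a minimal Python pattern using the python-dotenv package.  It assumes a .env file in your project directory that is listed in .gitignore, so it never gets committed:

    # .env contains a line like:  GITHUB_TOKEN=ghp_xxxx  (never commit this file)
    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # reads .env from the current directory into the environment
    token = os.environ["GITHUB_TOKEN"]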




Here we go again

 


I recently attended an AI conference in San Francisco.  During the conference, several speakers cited this MIT report: State of AI in Business 2025.  They seemed shocked at the study’s headline finding that 95% of GenAI projects fail to show a financial return.


I wasn’t shocked.


Hopefully, if you read my blog regularly, you weren’t shocked either.


Some things don’t change.  Technology adoption in the enterprise has some well-worn footpaths, and this is one of them.  When something like GenAI captures the public eye, there is pressure to simply adopt that thing.  Teams are told, “You need an AI strategy” or similar.  Often, the people giving that instruction have no idea what AI really is or what it does.  Thus, the people receiving the instructions have very little context about the outcome that their leadership wants to achieve.  If your goal is to simply adopt AI, then you can do so.  It’s not hard to bring a chatbot or other LLM-based tooling into your organization.


What the MIT study asked about was value.  Did these companies achieve positive ROI from these investments?  Well, no.  Probably because the teams involved were not told to do that.  They were simply told to go get AI.  Which they did.


As I discussed in my previous blog post, Why You Need an AI Business Plan, AI, just like any other technology you bring into your organization, needs a business plan to go with it.  The “why” question is the critical one to address before you begin the project, along with “what” and “how.”  Why are you adopting AI?  What benefits do you expect your company to experience?  How will you know when those benefits have been achieved?


Without answering these questions, you’re just pouring money down the drain. 


Here are six things you need to do BEFORE you agree to adopt a technology:


  1. Define the key stakeholder.  Who will benefit most from this project?  Are you doing this to save money?  Then finance (for example, the CFO) is a likely sponsor.

  2. Define the outcomes.  After this project is done, what is different?  This is often the focus of PR/FAQs, but you can use any format you want.  The point is to put down on paper what happens if we do this project.

  3. Define the measures.  If you know the positive benefits you’ll get, how will you measure this?  You’ll need to take a baseline first or you won’t know if your project moved the needle or not.  Are you currently measuring this?  If not, start right now.

  4. Working backwards from measures, design the project.  Only after you know the result you are trying to achieve should you write a high-level architecture and solution concept document.  “We are going to build X to achieve Y” is always how you want to start any design.

  5. Iterate.  Break the project down into small discrete steps.  Make sure that you are achieving the claimed benefit early in the process.  Don’t wait for a magical “big bang” at the end.  You should be making progress at every sprint.  You should be measuring this progress.  

  6. Pivot.  You were probably wrong in steps 1-5, above.  Evaluate your mistakes, adjust, replan.  You won’t really know what the heck you are doing until you do it.  So, take small steps and evaluate your performance as you go.  You’ll be wrong.  That’s OK.  Adjust.


The really big thing here is to expect and embrace failure.  In many organizations the team is not rewarded for declaring failure.  That’s a very unhealthy way to manage.  As an example, if you ask a team to try to reduce cost by 15% and they come back six weeks later and say, “Hey, that goal isn’t possible. We should cancel this project and do something else,” that’s a positive result.  They just figured out that this thing won’t work.  However, if you punish them for failure, they’ll keep working on it for months or years, only to fail after you’ve spent untold amounts trying to build something that cannot work.


Think about this another way.  If you are a manager, how many people have you promoted for killing a major project?  How many have you promoted for delivering the impossible?  My guess is none and tons, respectively.  This means you are rewarding hero culture.  Instead of encouraging people to be clear-eyed and dispassionate, you encourage them to take risky bets because they have personal career development goals on the line.  Is it surprising that failure is the most likely outcome if you have a culture like that?  It shouldn’t be.  Yes, the team should be willing to take risks.  No, they shouldn’t force a bad hand because they’re afraid to tell you the truth.


Like I said, this isn’t new.  I’ve experienced similar bubbles in my career a couple of times before.  During the dot-com boom in the late ’90s and early 2000s, companies rushed to get on the internet with zero idea of what the internet was for.  Later, cloud adoption mandates drove all kinds of low-value projects.  This is what prompted me to write my book, Why We Fail, which discusses enterprise cloud adoption and the issues just mentioned.


Perhaps I should come out with a second edition about AI adoption.  I could call it “Here We Go Again.”



Monday, October 6, 2025

The Time Trap

 


Your time estimates are wrong.  Don’t try to fix them.


Yeah, I said that.


I have been part of millions of planning meetings. I have seen thousands of project plans. The time estimates in them are all wrong.


All of them.


Instead of spending time coming up with a very precise estimate for how long it will take to build a feature, spend about five minutes working with engineering to figure out if the feature is "Large", "Medium" or "Small" (otherwise known as T-shirt sizing).  Define those terms by talking about the number of sprints required to get them done.  You’re still wrong, but you didn’t waste days or weeks coming up with the wrong estimate.


For example:


Small = 1-2 sprints

Medium = 3-9 sprints

Large = 10 or more sprints


Once you accept that the estimate is wrong, you can move on to planning with uncertainty.  When you stack rank, you know that "Small" things at the top of the stack will likely ship this quarter, but not definitely.  Similarly, "Large" items at the bottom of the stack aren't shipping any time soon.  This drives your roadmap.  Estimate by quarter how much work you will get done in those chunks based on stack rank.  For example, if your team runs six sprints a quarter, the top of the stack might hold a Medium and a couple of Smalls; everything below that line waits.  Of course, smaller chunks are better than larger chunks.  If possible, take a look at the large items and see if you can break them down further.  The smaller the chunk, the more accurate your estimate will be.


Is your estimate correct? No, it is not.


If you spend weeks developing a time estimate, will it be correct? No, it will not be correct.


Is it possible to be 100% correct in your estimate? No.


So, instead of investing a lot of time, take a quick stab at it, base your estimates on some objective reality and move on. It's wrong, but that's OK. The broad estimates can still give you an idea of how many things on the roadmap will be accomplished in a quarter.  The reality is that you are way better off just accepting this level of uncertainty than trying to get to a “correct” answer that you likely will never find.  For a PM, this discussion is usually driven by roadmap.  Your execs want to know when a certain feature will ship.  Don’t make the mistake of committing your product team to a specific ship date for a feature under development.  PM doesn’t commit to shipping deadlines.  Only engineering can ship a feature.  Let them answer the “when” question.


What you want to do is be very transparent about your stack rank and the size and complexity of features.  Small features with low risk and high priority should show up soon.  If they don't, there is something wrong.  Focus on that.  This type of discussion can inform your leadership of what you’re working on and why, and give them a view of where the effort and risk are.  That’s what they really need to know.


What you should be presenting instead:


  1. A good stack rank. Is eng executing against the stack rank? If not, stop. Until you agree with eng on stack rank, nothing else matters.

  2. A good view of complexity. Are "easy" features really easy? If eng says "piece of cake" do you get the feature quickly? If not, stop. If you don't have a good view of complexity and risk, you cannot estimate with any accuracy.

  3. A VERY VERY rough idea of time. It's way more important to know that X comes before Y. If you say X is six weeks out and it ships in eight weeks, does it matter? Not really. If you tell sales that Y is "THE MOST IMPORTANT THING ON OUR ROADMAP" but you don't ship it? You have a problem.


If you have those three things, you can build a roadmap and get leadership buy-in. 


I’ve said this before, but it bears repeating.  There are really only two things that distinguish a good product team from a poor one: quality and velocity.  If you are executing with a high degree of quality at a high velocity, your odds of success are much higher than if you are operating with low quality and/or low velocity.  As a product management leader, you should focus on quality and velocity above all else.  What you should not do is try to manage velocity with deadlines.  Artificially setting dates simply lowers quality and doesn’t address the underlying velocity problem you’re trying to solve.



Do You Have a Solution in Search of a Problem?

 




When working with product teams, I often find that they’re operating in one of two modes:


They have a problem and are looking for a solution.


Or


They have a solution and are looking for a problem.


As product managers (PMs) we are supposed to be focused on customers and their problems.  However, it’s not uncommon to find product teams that start with a solution and then look for problems that their solution solves.


That’s not a great place to be.


First of all, if you already have a solution in mind, that drives the discussion.  There is a reason why “if all you have is a hammer, the whole world looks like a nail” is such a common phrase.  It’s just human nature to try and make what you have work.  In product management circles, we tend to use the term confirmation bias but it’s the same idea.  If I go into a customer meeting with something specific to sell, I can usually find reasons why the customer needs that thing.  Which is fine, if you’re in sales.


However, if you’re trying to decide what to build, that’s the worst thing possible.  All you’re doing is selling the thing you already have.  That may or may not mean that you have the thing the customer really needs.


On the other hand, if you walk into the meeting looking for problems, you are now open to whatever the customer has to tell you.  I have spent the majority of my career working on enterprise-class software, so other areas may be different, but I can tell you that enterprise customers are more than happy to tell you their problems.  However, most enterprise software companies are not really listening.  What you see all the time is product teams trying to sell what they have on the truck instead of trying to figure out what the customer really needs.


And that’s a shame.


Here are some symptoms of solutions looking for problems:


  1. Engineering builds a prototype that you didn’t ask for.  It’s pretty common for high-performing engineering teams to come to you with great ideas.  Sometimes those ideas are truly amazing and you learn things.  Most of the time, though, these solutions aren’t actually aligned with customer requirements.  Now you have an amazing idea and you go around trying to figure out how to use it.

  2. Leadership “vision” is divorced from customer reality.  If senior leadership is driving the roadmap, it’s common for the roadmap to drift away from customer requirements.  Good leadership will listen to the field and adjust based on what they hear.  Yes, you need to have a strong product vision.  No, you should not ignore what customers are telling you.

  3. New ideas for customer problems to be solved are automatically quashed.  Yes, I admit I have done this.  If you are leading a product team with very tough goals, you have to remain focused.  Sometimes that means quashing or ignoring customer requirements.  All I can say is, know you’re doing this and do it on purpose if you need to do it.  You can get away with this for a while, but there are limits.

  4. PMs are not meeting directly with customers.  Several times in my career I’ve been told by sales that I don’t need to talk to the customer—that sales will tell me what the customer wants.  With all due respect, that’s not acceptable.  PM needs to speak to customers DIRECTLY and regularly.  Without direct contact, PM will make poor decisions.


So, what is a PM leader to do about all this?  How can you shift gears and move away from solutions and towards problems?


  1. Prioritize quality customer time.  I cannot emphasize enough how vital this is.  Unless you are talking directly to customers in relatively unstructured settings, you simply do not know what they want.  Spend the time.  Focus on their needs.  Ask questions.

  2. Put customers first.  When meeting with customers, always make time to discuss their issues and ask about their problems.  I have been in hundreds of customer meetings where I am expected to go over the roadmap in detail in just thirty minutes.  That just isn’t enough time.  I need at least fifteen minutes to ask questions.  Therefore, my minimum customer meeting is an hour.  More is better.

  3. Focus on why.  When sales (or anyone else) says that customers want X, always ask “why do they want X?”  You need to dig down into the underlying customer needs to really understand the requirements and develop a solution to the actual problem.

  4. Have strong opinions, loosely held.  This phrase always stands out in my mind as the definition of PM.  As a PM I have to be very firm in my opinion.  I have to have a reason for it.  I must be able to defend it.  On the other hand, I have to be willing to abandon my position given evidence that it’s wrong.  I can’t fall in love with my own plan.


As usual, when we get off track it’s because we forget why we are here.  As a PM team, our only job is to make the product better by serving the customer.  We don’t do that through surface interaction.  When I joined Splunk many years ago, I inherited a huge (1,000+ user stories) backlog of “customer requirements” that the team had been working on.  The first thing I did was insist on meeting directly with the customers who had these problems.  Every time I talked to one of these customers, I learned something.  The product got better as a result.  The PM is not a feature vending machine where you put requirements in and get features out.  The only true measure of a successful product is customer adoption.  If the customer likes and uses the feature, it’s a good feature, full stop.


Sunday, August 10, 2025

Yes, you do need Product Management

 


The massive rise of GenAI-based coding tools has created a huge amount of discussion and debate in engineering circles.  Of course, this affects product managers (PMs) in new and interesting ways as well.


Wild claims are being made that PMs are no longer needed to build software, such as this one by Lovable:



The reason Lovable doesn’t need PMs is not AI.  Lovable doesn’t have PMs because they are building a developer product with a very small team.  The developers on the team already know the user persona, so they don’t need persona development, use case research, and so on.  And because the team is small, they don’t need much of the cross-team alignment work that PMs do.


However, the bulk of what PMs do will not and cannot be replaced by AI.


The PM team is the team that decides what to build and why.  AI can help you with analysis, but someone still makes the call.  The person who makes that call is called the product manager, regardless of what their actual job title is.


Google, on the other hand, has made a conscious decision to stop writing documents.  I’m not sure if this is a good idea or not, but it has nothing to do with the PM job going away.  Just the opposite.  What Google is saying is that they expect PMs to produce prototypes.  That is to say, PMs will add more value by taking their ideas to prototype instead of just writing requirements.  This is all about AI.


While I am a big fan of long-form writing, the entire point of things like PRDs or PR/FAQs is to help engineering understand the business requirements from PM.  The actual documents are not really important.  What is important is making sure that engineering understands you.  This means the decision to use a PRD (or not) cannot be PM’s alone.  It must be a joint decision between engineering (the consumer) and PM (the producer).  Just like any other customer, you need to really understand your engineering organization’s needs so that you can meet them.


This takes us back to AI-based prototypes.  The decision to move to AI-based prototypes needs to be a joint decision between PM and engineering.  If your engineers feel that the prototypes would be more helpful than PRDs, then they are probably right.  Go try it.  If engineering tells you they really prefer PRDs, you need to accept that preference.


I’ve worked in both very writing-heavy and very mockup-heavy organizations.  If your engineering team has zero interest in PRDs but loves Figma, then AI-based prototypes are probably a good idea.  In most ways a prototype is better than a mockup because it’s actual running code.


Of course, there are requirements that you can’t describe with a prototype.  Especially non-functional requirements like performance and security—you will need to write those things down.  They can be relatively short, but they do need to be formalized. 


In the end, this is about outcome.  Focus on helping engineering understand what needs to be built.  Focus on them and their needs, not on you and your preferences.  Once they understand the requirements, you’re done.  


Sunday, August 3, 2025

Users lie. Don’t fall for it.


When conducting customer interviews, one of the classic errors I see product managers (PMs) make is to ask customers what they want.  I cannot tell you how many times I’ve been in a feature validation meeting and the PM leading the session asks, “Would you use this feature if it were in the product?”


I’m sorry, but that’s a really bad question to ask.  Don’t do that.


Here’s the deal.  The users don’t really know.  They also have agendas.  They also forget things.  They also lie sometimes.


Many moons ago, I was working on a strategic infrastructure product.  As part of this product, we were trying to decide what features to add in what order.  Normal PM work.  One of the features we were thinking about was a disaster recovery (DR) feature. The team responsible for backup and DR was really excited about this feature.  They did a survey of existing customers that asked, “If we had this DR feature, would you use it?”  Not exactly that question, but pretty close.  50% of our install base said, “Yes, we would use it.”


Here’s the problem though.  We already knew that our market wasn’t spending money on DR.  We had revenue numbers showing that our existing DR product wasn’t selling well.  This set off alarm bells on my team.  Huge argument ensues.  We spend MONTHS debating this.  Finally we decide to build the feature.


The feature falls flat on its face.  Nobody wants it.  We really struggle to get even one customer deployed.  Months go by and we finally do some proper research.  It turns out that all these customers know that they should do DR, but they don’t.  Auditors tell them they should do it, their exec leadership says they should do it, but when it comes down to actual planning, they just don’t make time and it falls off the work plan.  Thus, any DR solution has to address this critical time and staffing issue or it won’t get adopted.


Sigh.  Back to the drawing board. 


So, how do you learn from my costly mistake?


Customer needs vs. wants


You need to get away from stated customer desire.  In the end, it really doesn’t matter what they want.  Hell, I want to look like Hugh Jackman.  I don’t and I won’t.  I need to be healthy and exercise, so that’s what I actually do.  When you talk to a customer, it’s really easy to find out what they want.  They’re usually happy to tell you.  But this is just surface clutter.  The entire point of a customer interview is to dig down and understand customer need.  What is it that they need to be true?  Can you satisfy that need?  Can you make their pain go away?  If so, you have a customer for life.


In the previous example, they said they wanted DR.  Maybe they did, maybe they didn’t.  What they needed was for audit and senior leadership to get off their backs.  So, we introduced a feature that automatically made their data resilient to a single data center failure.  Customers loved it.  They did no work and their auditors were happy.  The feature sold amazingly well.


Here are some ways you can dig down into need and away from want:


  • Always ask how it is done now.  How long does this thing take?  How much does it cost?  Who does the work?  Would our proposed solution be cheaper, faster or better than what they do right now?  Knowledge of the current state helps you understand your value prop.  You save them five minutes?  Meh.  You save them a million bucks?  Hell, ya.

  • Focus on them, not on you.  Who cares how your product works or what your roadmap is?  Worry about them getting promoted.  What can you do in the product to make this person a rockstar?  How do they get promoted for using your product?  Going into a customer meeting, giving a demo and walking out is a complete waste of PM time.  That should have been done by a sales engineer (SE).  Yes, you have to pay for their time by telling them what they want to know about the roadmap, but no, you’re not there to sell.  Don’t talk about you, your team and all the hard work you’ve been doing.  Ask about them and how they are doing.

  • Talk to the right person.  Do you know who your buyer persona is?  Your user persona?  Is this person one of those two?  If not, why are you talking to them?  Does it matter what someone who doesn’t use your product and doesn’t have the problem you are trying to solve thinks?  No, it doesn’t.  Make sure you know who you are talking to and why.

  • Ask them to show you.  One time when I worked for VMware, I was on site with a German retailer and we were discussing a new feature in a conference room for over an hour.  I just wasn’t getting it.  Finally, I said, “Who does this now?”  Dieter does this now.  “Where is Dieter?”  Dieter works down the hall.  “Can we go see Dieter?”  Yes, right this way.  Man, I learned more in the next 30 minutes than I had learned in a month.  Amazing.  Cherish those opportunities.  

  • Always force them to make a tradeoff.  Never ask, “Do you want A?”  Always say, “If you could have A or B, which would you choose?”  Assuming that B is a good feature, picking A over B means that A is also pretty good.  Of course, this test only works if B is actually good.  So, pick B carefully; just because you love B doesn’t mean that B is a killer feature.  Pick something you’re pretty sure the customer loves.  Make them kill their own darlings to get A.  If they’re willing to do that, then A is pretty good.

  • Always ask why.  If a customer tells you something or states a preference, always ask why they have that preference.  So, if a customer says “I really want you to integrate with Jira,” you need to know why they want to integrate with Jira.  Is this about speed?  Cost?  Can you see a sample Jira ticket?  How many times has Jira integration come up this week?  What would happen if I don’t integrate with Jira?  Is there an existing corporate mandate that they must use Jira?  And so on.

  • Always aggregate.  A single customer, even a really big customer, saying that they want something is interesting but ultimately irrelevant.  If you find that a high percentage of customers have the same underlying pain points that you can resolve with feature X, now you have something meaningful.

  • Trust, but verify.  Anything that you’re told about user behavior is subject to misunderstanding, false reporting or other issues.  Don’t guess.  If they say, “We use feature X daily,” go check.  If you don’t have enough reporting in the product to know what features they use, go fix that first.  I have had multiple people tell me that features were highly valued by customers, only to find out in product telemetry that the feature never got used.


Of course, I’ve also seen PM teams with the opposite problem: teams that completely ignore customer preferences and needs.  I’ve actually been told, “Nobody wanted the iPhone, but Steve Jobs built it anyway.  This is our iPhone moment.”


No.


You are not Steve Jobs.  You are not inventing the iPhone.  Just no.  


You cannot ignore reality.  If customer need isn’t there, the odds of you making the product successful are super low.  Yes, you could get lucky, but do you want to rely on luck?  There are millions of actual pain points inside customer organizations right now.  Millions.  Just pick one.  Solve the pain point.  Iterate.   This is the only way to consistently build software your customers want and will pay for.