Monday, October 13, 2025

Claude Code Micro Review


I’ve recently come across product managers (PMs) who are using Claude Code for PM tasks like document review.  This interested me because I assumed that Claude Code was optimized for, well, code.


It turns out that despite the name, Claude Code is essentially a CLI version of Claude.  It works a little differently from the desktop app, has some nifty features the desktop app doesn’t have, and is pretty darn useful for PM work.  I find this fascinating because you would think Anthropic would add those features to the desktop product.  Perhaps they’re working on it?


There are two key features in Claude Code that really interest me.  The first is the /init command.  As I’ve discussed before, context is king when it comes to AI, and Claude Code has a very nice feature here that I’m shocked to find missing from most AI assistants.  When you issue the /init command, Claude Code scans the directory you’re in and creates a CLAUDE.md file:



The CLAUDE.md file is basically a set of instructions describing what the directory is and what you want Claude to do there.  This way, you can set rules for Claude.  The original intent was that you would use Git to grab a repo, have Claude Code scan it, and then set rules like “always use React” or “conform to coding best practice Y” or some such.  However, since the underlying Claude functionality is all there inside Claude Code, you can use it for pretty much anything you like.
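To give a flavor, here is the kind of thing a PM-oriented CLAUDE.md might contain.  This is just an illustrative sketch I wrote by hand, not output from /init:

    This folder contains raw customer interview transcripts, one text file per interview.

    Rules:
    - Treat the transcripts as the only source of truth; never invent quotes.
    - When asked for themes, cite the transcript file each theme came from.
    - Write summaries for a product management audience, not an engineering one.

Once that file exists, every new Claude Code session in that directory starts with those instructions as context.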


So, you could drop all of your customer interview transcripts into a directory, have Claude Code read them, and then ask questions about them.  Very cool functionality for a PM.  Since it sits on the Claude platform, you can also use the Model Context Protocol (MCP) to talk to external sources like Jira:



In this case, I connected it to Jira and asked it to review my current backlog and create a new sprint focusing on the highest-priority items.  I gave it some basic criteria, like security and customer impact, and told it to build the sprint around them.  It did a good job with the criteria I gave it.
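For reference, Claude Code can pick up project-level MCP servers from a .mcp.json file at the root of the directory (you can also register them with the claude mcp add command; check the current docs for the exact syntax).  The server name and launch command below are placeholders for whichever Jira MCP server you choose, and credentials should come from environment variables rather than being written into the file:

    {
      "mcpServers": {
        "jira": {
          "command": "npx",
          "args": ["-y", "your-jira-mcp-server-of-choice"]
        }
      }
    }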


This type of stuff can consume endless hours of PM time, so it’s interesting to me to see how much automation I can fit into my toolchain.  I’m not really interested in taking on a brand-new toolchain (I already know how to use GitHub and Jira), but I’d like my AI assistant to get in there and do some of the grunt work.  For me, this is key: AI should simply use the tools you already use and act as an extended member of your team.  When it does, the benefits are huge: you don’t need to retool, you don’t need to retrain everyone, and the assistant starts paying off immediately.


The second key feature is that it can make API calls for you or use MCP servers to access products like Jira or Slack.  I created an API token for my GitHub account and asked Claude Code to summarize the recent code changes in a project:



It was able to summarize the recent commits to this repo pretty accurately, something that’s super handy for a PM to do but not something we usually have time for.  In the past, I would bug my eng team to find out whether feature X or Y had been checked into main.  Now I can just ask Claude Code whether a PR got merged or whether the test cases passed.  Very handy, and it cuts down on the amount of engineering time I consume with dumb questions.
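If you want to sanity-check what it’s doing (or reproduce the query yourself), the underlying GitHub REST call is simple.  Here is a rough Python sketch of the same lookup; OWNER/REPO are placeholders and the token comes from an environment variable rather than the source:

    import os
    import requests

    # Personal access token, exported beforehand (never hard-coded in the script).
    token = os.environ["GITHUB_TOKEN"]

    # List the ten most recent commits on the repo's default branch.
    resp = requests.get(
        "https://api.github.com/repos/OWNER/REPO/commits",
        headers={"Authorization": f"Bearer {token}"},
        params={"per_page": 10},
    )
    resp.raise_for_status()

    for commit in resp.json():
        sha = commit["sha"][:7]
        subject = commit["commit"]["message"].splitlines()[0]
        print(f"{sha}  {subject}")

That raw commit list is essentially the data Claude Code is reading and summarizing for you.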


While I’m pretty comfortable working at a command prompt, it’s not where I normally do my work. I’m usually using things like Slack instead.


Happily, there is also a Slack integration:


https://github.com/mpociot/claude-code-slack-bot


However, this implementation is Mac-only.  Being a Windows user, I was a bit frustrated until I realized that I could just fix the problem myself using Claude Code.  So I forked the project to my GitHub account and asked Claude Code to make it work on Windows.  After a bit of back and forth, I also found that the original project had a couple of bugs, which Claude Code fixed as well.  It took me a couple of hours (mostly because I’m not really a developer), but I got it working.  I then asked Claude Code to write a PR so these changes could go upstream to the open source version:


This is not something I could have done on my own.  In fact, in all my years working on open source (my first open source project was OpenStack in 2010), this is the very first code I have ever attempted to upstream.  I have written PRs before but always for things like doc bugs or other non-code items.  


With the Slack integration working, Claude Code is simply another team member on Slack, doing things I ask.  Note that it is running on my Windows machine now:  



Of course, you can combine these actions.  Let’s say you want it to implement a specific Jira ticket.  In this case, I asked Claude Code to take a ticket, look at the code base, and write a plan for how that ticket could be implemented:



Now I can take this plan and discuss it with my engineering team.  Does this plan make sense?  Does it break things?  And so on.  


Of course, I can also have Claude Code just do the work.  After approving the plan, I had it open the PR directly via the GitHub integration:



Claude Code is really good at making the change and creating a PR.  It talks to Git natively and correctly creates a branch, commits changes, and so on.  Since this is what Claude Code was built for, that’s not terribly surprising.


In summary, if you’re willing to put up with a command-line interface, you can pretty easily build a custom AI assistant for your project that talks to Jira, Git, Slack, or whatever you want and does work across those platforms.  Note that I didn’t write any code here; I just gave Claude Code instructions and it did the work.


IMPORTANT NOTE:  If you are using Claude Code or any other tool to access APIs, please be careful how you manage your API keys and other security information.  Do not hard-code API keys into your code, and do not upload your passwords or keys to GitHub.  The safest approach is to use a secrets manager, or to store them for local use in an environment variable or in a .env file that is excluded from version control (add it to .gitignore).
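For example, a minimal version of that pattern in Python (using the python-dotenv package; plain environment variables work just as well), with the actual key living in a .env file that is listed in .gitignore:

    import os

    from dotenv import load_dotenv  # pip install python-dotenv

    # Reads key=value pairs (e.g. GITHUB_TOKEN=...) from a local .env file into
    # the process environment.  The .env file itself never goes into Git.
    load_dotenv()

    token = os.environ["GITHUB_TOKEN"]  # raises KeyError if the key isn't set
    # Hand `token` to whatever client needs it; never paste it into source files.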




Here We Go Again

 


I recently attended an AI conference in San Francisco.  During the conference, several speakers cited this MIT report: State of AI in Business 2025.  They seemed shocked at the study’s headline finding that 95% of GenAI projects fail to show a financial return.


I wasn’t shocked.


Hopefully, if you read my blog regularly, you weren’t shocked either.


Some things don’t change.  Enterprise technology adoption has some well-worn footpaths, and this is one of them.  When something like GenAI captures the public eye, there is pressure to simply adopt that thing.  Teams are told, “You need an AI strategy” or similar.  Often, the people giving that instruction have no idea what AI really is or what it does.  Thus, the people receiving the instruction have very little context about the outcome their leadership wants to achieve.  If your goal is simply to adopt AI, you can do so.  It’s not hard to bring a chatbot or other LLM-based tooling into your organization.


What the MIT study asked about was value.  Did these companies achieve positive ROI from these investments?  Well, no.  Probably because the teams involved were never asked to do that.  They were simply told to go get AI.  Which they did.


As I discussed in my previous blog post, Why You Need an AI Business Plan, AI, just like any other technology you bring into your organization, needs a business plan to go with it.  The “why” question is the critical one to address before you begin the project, along with “what” and “how.”  Why are you adopting AI?  What benefits do you expect your company to see?  How will you know when those benefits have been achieved?


Without answering these questions, you’re just pouring money down the drain. 


Here are six things you need to do BEFORE you agree to adopt a technology:


  1. Define the key stakeholder.  Who will benefit most from this project?  Are you doing this to save money?  Then finance (for example, the CFO) is a likely sponsor.

  2. Define the outcomes.  After this project is done, what is different?  This is often the focus of PR/FAQs, but you can use any format you want.  The point is to put down on paper what will have changed once the project ships.

  3. Define the measures.  If you know the benefits you expect, how will you measure them?  You’ll need to take a baseline first, or you won’t know whether your project moved the needle.  Are you measuring this today?  If not, start right now.

  4. Working backwards from the measures, design the project.  Only after you know the result you are trying to achieve should you write a high-level architecture and solution concept document.  “We are going to build X to achieve Y” is always how you want to start any design.

  5. Iterate.  Break the project down into small, discrete steps.  Make sure you are achieving the claimed benefit early in the process.  Don’t wait for a magical “big bang” at the end.  You should be making progress every sprint, and you should be measuring that progress.

  6. Pivot.  You were probably wrong in steps 1-5, above.  Evaluate your mistakes, adjust, replan.  You won’t really know what the heck you are doing until you do it.  So, take small steps and evaluate your performance as you go.  You’ll be wrong.  That’s OK.  Adjust.


The really big thing here is to expect and embrace failure.  In many organizations the team is not rewarded for declaring failure.  That’s a very unhealthy way to manage.  As an example, if you ask a team to try to reduce cost by 15% and they come back six weeks later and say, “Hey, that goal isn’t possible. We should cancel this project and do something else,” that’s a positive result.  They just figured out that this thing won’t work.  However, if you punish them for failure, they’ll keep working on it for months or years, only to fail after you’ve spent untold amounts trying to build something that cannot work.


Think about this another way.  If you are a manager, how many people have you promoted for killing a major project?  How many have you promoted for delivering the impossible?  My guess is none and tons, respectively.  This means you are rewarding hero culture.  Instead of encouraging people to be clear-eyed and dispassionate, you encourage them to take risky bets because they have personal career development goals on the line.  Is it surprising that failure is the most likely outcome in a culture like that?  It shouldn’t be.  Yes, the team should be willing to take risks.  No, they shouldn’t force a bad hand because they’re afraid to tell you the truth.


Like I said, this isn’t new.  I’ve experienced similar bubbles a couple of times before in my career.  During the dot-com boom in the late ’90s and early 2000s, companies rushed to get on the internet with zero idea of what the internet was for.  Later, cloud adoption mandates drove all kinds of low-value projects.  That’s what prompted me to write my book, Why We Fail, which discusses enterprise cloud adoption and the issues just mentioned.


Perhaps I should come out with a second edition about AI adoption.  I could call it “Here We Go Again.”



Monday, October 6, 2025

The Time Trap

 


Your time estimates are wrong.  Don’t try to fix them.


Yeah, I said that.


I have been part of millions of planning meetings. I have seen thousands of project plans. The time estimates in them are all wrong.


All of them.


Instead of spending time coming up with a very precise estimate for how long it will take to build a feature, spend about five minutes working with engineering to figure out whether the feature is “Large,” “Medium,” or “Small” (otherwise known as T-shirt sizing).  Define those terms by the number of sprints required to get the work done.  You’re still wrong, but you didn’t waste days or weeks coming up with the wrong estimate.


For example:


Small = 1-2 sprints

Medium = 3-9 sprints

Large = 10 or more sprints


Once you accept that the estimate is wrong, you can move on to planning with uncertainty.  When you stack rank, you know that “Small” things at the top of the stack will likely, but not definitely, ship this quarter.  Similarly, “Large” items at the bottom of the stack aren’t shipping any time soon.  This drives your roadmap: estimate, quarter by quarter, how much of that work you can take on in stack-rank order.  Of course, smaller chunks are better than larger chunks.  If possible, take a look at the large items and see if you can break them down further.  The smaller the chunk, the more accurate your estimate will be.
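If it helps to see the arithmetic, here is a toy Python sketch of that kind of quarter-by-quarter roll-up.  The sizes, sprint capacity, and backlog are made up, and it assumes the team works the stack rank top to bottom; the point is rough sequencing, not dates:

    # Rough sprint cost per T-shirt size -- use your own numbers.
    SPRINT_COST = {"S": 2, "M": 6, "L": 10}
    SPRINTS_PER_QUARTER = 6

    # Backlog in stack-rank order: (feature, size).
    backlog = [("A", "S"), ("B", "M"), ("C", "S"), ("D", "L"), ("E", "M")]

    quarter, remaining = 1, SPRINTS_PER_QUARTER
    for feature, size in backlog:
        cost = SPRINT_COST[size]
        while cost > remaining:        # work spills into the next quarter
            cost -= remaining
            quarter, remaining = quarter + 1, SPRINTS_PER_QUARTER
        remaining -= cost
        print(f"{feature} ({size}): lands around Q{quarter}")

Nobody should treat the later quarters in that output as commitments; the only thing it tells you is that the big items near the bottom aren’t landing soon.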


Is your estimate correct? No, it is not.


If you spend weeks developing a time estimate, will it be correct? No, it will not be correct.


Is it possible to be 100% correct in your estimate? No.


So, instead of investing a lot of time, take a quick stab at it, base your estimates on some objective reality and move on. It's wrong, but that's OK. The broad estimates can still give you an idea of how many things on the roadmap will be accomplished in a quarter.  The reality is that you are way better off just accepting this level of uncertainty than trying to get to a “correct” answer that you likely will never find.  For a PM, this discussion is usually driven by roadmap.  Your execs want to know when a certain feature will ship.  Don’t make the mistake of committing your product team to a specific ship date for a feature under development.  PM doesn’t commit to shipping deadlines.  Only engineering can ship a feature.  Let them answer the “when” question.


What you want to do is be very transparent about your stack rank and the size and complexity of features.  Small features with low risk and high priority should show up soon.  If they don’t, there is something wrong; focus on that.  This type of discussion informs your leadership about what you’re working on and why, and gives them a view of where the effort and risk are.  That’s what they really need to know.


What you should be presenting instead:


  1. A good stack rank. Is eng executing against the stack rank? If not, stop. Until you agree with eng on stack rank, nothing else matters.

  2. A good view of complexity. Are "easy" features really easy? If eng says "piece of cake" do you get the feature quickly? If not, stop. If you don't have a good view of complexity and risk, you cannot estimate with any accuracy.

  3. A VERY VERY rough idea of time. It's way more important to know that X comes before Y. If you say X is six weeks out and it ships in eight weeks, does it matter? Not really. If you tell sales that Y is "THE MOST IMPORTANT THING ON OUR ROADMAP" but you don't ship it? You have a problem.


If you have those three things, you can build a roadmap and get leadership buy-in. 


I’ve said this before, but it bears repeating.  There are really only two things that distinguish a good product team from a poor one: quality and velocity.  If you are executing with a high degree of quality at a high velocity, your odds of success are much higher than if you are operating with low quality and/or low velocity.  As a product management leader, quality and velocity should be the two things you focus on most.  What you should not do is try to manage velocity with deadlines.  Artificially setting dates simply lowers quality and doesn’t address the underlying velocity problem you’re trying to solve.



Do You Have a Solution in Search of a Problem?

 




When working with product teams, I often find that they’re operating in one of two modes:


They have a problem and are looking for a solution.


Or


They have a solution and are looking for a problem.


As product managers (PMs) we are supposed to be focused on customers and their problems.  However, it’s not uncommon to find product teams that start with a solution and then look for problems that their solution solves.


That’s not a great place to be.


First of all, if you already have a solution in mind, that drives the discussion.  There is a reason “if all you have is a hammer, the whole world looks like a nail” is such a common phrase.  It’s just human nature to try to make what you have work.  In product management circles we tend to use the term confirmation bias, but it’s the same idea.  If I go into a customer meeting with something specific to sell, I can usually find reasons why the customer needs that thing.  Which is fine, if you’re in sales.


However, if you’re trying to decide what to build, that’s the worst thing possible.  All you’re doing is selling the thing you already have.  That may or may not mean that you have the thing the customer really needs.


On the other hand, if you walk into the meeting looking for problems, you are now open to whatever the customer has to tell you.  I have spent the majority of my career working on enterprise-class software, so other areas may be different, but I can tell you that enterprise customers are more than happy to tell you their problems.  However, most enterprise software companies are not really listening.  What you see all the time is product teams trying to sell what they have on the truck instead of trying to figure out what the customer really needs.


And that’s a shame.


Here are some symptoms of solutions looking for problems:


  1. Engineering builds a prototype you didn’t ask for.  It’s pretty common for high-performing engineering teams to come to you with great ideas.  Sometimes those ideas are truly amazing and you learn things.  More often, though, these solutions aren’t actually aligned with customer requirements.  Now you have a clever prototype and you go around trying to figure out how to use it.

  2. Leadership “vision” is divorced from customer reality.  If senior leadership is driving the roadmap, it’s common for the roadmap to become disconnected from customer requirements.  Good leaders will listen to the field and adjust based on what they hear.  Yes, you need a strong product vision.  No, you should not ignore what customers are telling you.

  3. New ideas for customer problems to be solved are automatically quashed.  Yes, I admit I have done this.  If you are leading a product team with very tough goals, you have to remain focused.  Sometimes that means quashing or ignoring customer requirements.  All I can say is, know you’re doing this and do it on purpose if you need to do it.  You can get away with this for a while, but there are limits.

  4. PMs are not meeting directly with customers.  Several times in my career I’ve been told by sales that I don’t need to talk to the customer—that sales will tell me what the customer wants.  With all due respect, that’s not acceptable.  PM needs to speak to customers DIRECTLY and regularly.  Without direct contact, PM will make poor decisions.


So, what is a PM leader to do about all this?  How can you shift gears and move away from solutions and towards problems?


  1. Prioritize quality customer time.  I cannot emphasize enough how vital this is.  Unless you are talking directly to customers in relatively unstructured settings, you simply do not know what they want.  Spend the time.  Focus on their needs.  Ask questions.

  2. Put customers first.  When meeting with customers, always make time to discuss their issues and ask about their problems.  I have been in hundreds of customer meetings where I was expected to go over the roadmap in detail in just thirty minutes.  That just isn’t enough time.  I need at least fifteen minutes to ask questions, so my minimum customer meeting is an hour.  More is better.

  3. Focus on why.  When sales (or anyone else) says that customers want X, always ask “why do they want X?”  You need to dig down into the underlying customer needs to really understand the requirements and develop a solution to the actual problem.

  4. Have strong opinions, loosely held.  This phrase always stands out in my mind as the definition of PM.  As a PM I have to be very firm in my opinion.  I have to have a reason for it.  I must be able to defend it.  On the other hand, I have to be willing to abandon my position given evidence that it’s wrong.  I can’t fall in love with my own plan.


As usual, when we get off track it’s because we forget why we are here.  As a PM team, our only job is to make the product better by serving the customer.  We don’t do that through surface-level interaction.  When I joined Splunk many years ago, I inherited a huge (1,000+ user stories) backlog of “customer requirements” that the team had been working on.  The first thing I did was insist on meeting directly with the customers who had these problems.  Every time I talked to one of those customers, I learned something, and the product got better as a result.  The PM is not a feature vending machine where you put requirements in and get features out.  The only true measure of a successful product is customer adoption.  If the customer likes and uses the feature, it’s a good feature, full stop.