Monday, November 3, 2025

Agile and the age of AI

 


Someone asked me yesterday if we can still use agile methods in the age of AI.


My answer was an unequivocal yes.


But then I thought about it a bit.  The reality is that I’ve never worked in a “pure” agile shop.  I’ve run into folks who are truly experts in this area, and when I listen to them I realize that most places I’ve worked have bent the “pure” agile model to make it work for them.  It’s gotten to the point that I don’t really know what “agile” stands for any more.


As a long-time product manager for a variety of SaaS applications, I know what a “good” SaaS product looks like.  I’ve worked with very strong engineering teams and I’ve worked with very weak engineering teams.  I’ll say this again for those who are new to my blog series: the only differences between good teams and bad teams are quality and velocity.  Good teams ship with high quality at high velocity.  Thus the real test here is, “Can AI ship high-quality code at a high velocity?”


We know that AI can ship with high velocity.  We’ve all seen the demo.  How do we help AI ship with high quality?


Perhaps not surprisingly, all the things we’ve learned about running high-quality SaaS sites are still true.  Shocker.  Do these apply to AI?  As I discussed in a previous blog post, you can and should set up your AI tooling to use best practices, and you should use a similar “begin with the end in mind” strategy to the one you use with regular software development.


Most modern SaaS teams are running CI/CD (Continuous Integration/Continuous Delivery). While not strictly speaking part of agile methodology, it is something that fits into the larger agile mindset of moving quickly and shipping interim builds.  Interestingly, CI/CD is not something that most AI coding tools support.  If you use a tool like Lovable or V0, you will simply get running code.  This is interesting, but running code is not a product.  SaaS applications are living things.  They change regularly.  This means that you need some way to inject code into your site on a regular basis without breaking things.
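

To make that concrete, here is a minimal sketch (Python, purely illustrative) of the kind of post-deploy smoke check a CI/CD pipeline can run before promoting a build.  The staging URL and the /health endpoint are assumptions for the example, not features of any particular tool.

# smoke_check.py -- a minimal post-deploy gate a CI/CD pipeline might run.
# The URL and the /health endpoint are illustrative assumptions.
import sys
import urllib.request

def site_is_healthy(base_url: str) -> bool:
    """Return True if the health endpoint answers 200 within 10 seconds."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "https://staging.example.com"
    if not site_is_healthy(url):
        print("Smoke check failed -- do not promote this build.")
        sys.exit(1)
    print("Smoke check passed.")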


Which leads us to…


Testing.


A well-built site has very strong test suites that prevent “regressions,” which is what we call it when something that used to work doesn’t work any more.  If you’ve never worked on a SaaS product, you would be surprised to find out how common it is for things that worked perfectly days, weeks or months before to just magically break.  Thus, your test suite.  What really confuses me is when I read folks online complaining that their AI tool made some mistake, that it created a new bug or did something else wrong.  Why would I be surprised that AI coding tools create bugs?  Real programmers do this all the time.  It’s the reason why we have things like commit checks and automated testing—to catch these inevitable errors.


AI, if anything, is even worse about regressions: it has a limited context window, so it forgets from one session to the next.  An AI programmer will simply do what you tell it.  If you tell it to fix a bug, that doesn’t imply to its robot brain, “Fix this bug without introducing new bugs.”  No, it just fixes the bug in the most expeditious way possible.  Even if you say things like “do not introduce regressions” in your prompt, it will do so anyway out of ignorance.  This means that you need a super strong test path that ensures your coding AI isn’t breaking things every time it makes a change.  Again, most AI coding tools do not work this way.
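

To show what I mean by a test path, here is a tiny pytest sketch.  The function and its behavior are made up for illustration (in a real project create_user would live in your application code); the point is that every behavior you care about is pinned down by a test the AI has to keep green on every change.

# test_users.py -- an illustrative pytest regression suite.  create_user is a
# toy stand-in for an application function the AI might "fix"; it is inlined
# here only so the example is self-contained.
import pytest

_users: set[str] = set()

def create_user(email: str) -> str:
    """Toy stand-in: normalizes the email and rejects duplicates."""
    normalized = email.lower()
    if normalized in _users:
        raise ValueError("duplicate email")
    _users.add(normalized)
    return normalized

def test_duplicate_email_is_still_rejected():
    create_user("ada@example.com")
    with pytest.raises(ValueError):
        create_user("ada@example.com")

def test_email_is_still_normalized_to_lowercase():
    assert create_user("Grace@Example.com") == "grace@example.com"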


I would argue that if you don’t have a way to safely ship code that has been carefully and thoroughly tested, you’re in trouble.  This means that any AI product team is going to have issues here unless they address them right up front.


Thankfully, the industry has been working on this problem for years.  There are tons of tools out there expressly designed to help you solve it.  In the following discussion, I’ll show examples of what I am using in my personal development projects.  This in no way implies that this is the ONLY way or even the CORRECT way.  This is just the way I’ve done it, and it seems to be working for me so far.


Phase 1:  Begin with the end in mind.


When I begin a new project, I always start with a PRD.  This may seem odd because PRDs are often thought of as “anti-agile,” but I’m a big fan of long-form communication.  When I worked at HashiCorp we always used PRDs because we were a remote-first company (this was before COVID, when that was rare).  When I am working with Claude Code, for example, I usually start with an empty directory and a PRD.  That’s it.


Phase 2: Start from the bottom.


One thing that I see “vibe coding” type solutions do all the time is to build the UI first.  I get why they do that; it’s the thing the user wants to see.  But the problem is that a properly built SaaS application sits on a complex platform.  If that platform isn’t right, you will have all kinds of trouble later.  So, start with the framework and then add features to this framework.  Try a prompt like this:


I would like an architectural framework for building my SaaS business as described in the PRD.  This SaaS application will run on AWS and should make use of appropriate AWS PaaS services.  It will be deployed via Terraform and use GitHub Actions for CI/CD.  Propose a full tech stack for our MVP.  Reference the PRD in the docs and put your recommended architecture in a file called architecture.md in the docs folder.  Keep it simple, this is just the MVP and we can add advanced features later.


This prompt caused Claude Code to build me an architecture plan that I could then review and edit.  I made some changes, but the plan was pretty decent.  Notice that I’m an opinionated consumer.  I know Terraform because I used to work at HashiCorp.  I’ve also been working with AWS for over 15 years, so I’ll want to host my application there.  I chose GitHub Actions because it’s the easiest way for me to get a full CI/CD platform.  You can make different choices, but the point is that you need to make choices.  These choices will dictate your ability to ship features later, so they do matter.


Phase 3:  Make the rules.


After I had those two documents (the PRD and the architecture plan), I ran Claude Code’s /init command in that directory.  Claude read those two documents, realized that I was very early in a software project and populated the Claude.md file appropriately.  Again, I had to review this file and update it.  There were several things I wanted the project to do and, most importantly, several things that I DIDN’T want the project to do.  So, you do want to read that file carefully.


Phase 4: Testing framework.


Before you allow the system to write any code, you need to have a testing framework.  Because AI tools tend to just make stuff up as they go, you have no idea what they’re going to do.  Thus, you need some sort of testing in place that keeps them on track.  Especially if you’re not a full-time developer, you won’t be able to just read the code and tell whether it’s OK or not.  In my case, I haven’t written code professionally for over twenty years, so I’m not really qualified to review the AI’s code.  Again, testing.


Phase 5: Planning


One handy pattern that I use with Claude is to ask it to plan first and then work.  This means that when you are about to do something major, like creating the basic framework for your application, you want Claude to carefully plan it out first.  As I did for the architecture plan, I asked Claude to create a comprehensive step-by-step plan for building the application framework.  Then I took that plan, reviewed it and broke it down still further: I took each phase of the plan and asked Claude to give me a detailed plan for that section.  This iterative planning process seems to give me a better result than a single one-shot prompt.  Remember that Claude and other tools have a limited context window.  That means that smaller tasks are more likely to be completed successfully.

For my convenience, I usually ask Claude to write each plan into a Markdown file, and I keep all of those plans.  Then, if something goes wrong later, I can say, “Open planning document X.  Compare current state to that document.  What is wrong with the current implementation?”  This forces Claude to think about what was supposed to happen and then reflect on what the current state actually is.


Phase 6:  Best Practice


After Claude has built a working prototype, you need to figure out if this thing is any good.  If you’re not an expert in things like Terraform, it may be hard to figure out if the implementation is decent or not.  One trick is to find high-quality best practice documents.  For example, AWS has a great Terraform best practice document.  I took that document, downloaded the PDF and put it into my docs directory.  I then asked Claude to read that document and compare our implementation to that.  It came back with some very concrete things we could be doing better.  


Phase 7: Iterate


As you learn, you loop back up into planning mode above.  You’ve probably made mistakes and the AI has certainly made mistakes.  Just like any project I’ve worked on, AI tools require me to iterate frequently to refine and repair what has been done.


At this point, you have a real software project.  It’s not ready for prime time yet, but you have a basic structure that allows you to create features, test them and push them into production.  Your AI software project is now in a better place than half the software teams I’ve worked with in the past.  Congratulations.  


Friday, October 31, 2025

If it doesn’t suck, is it still vibe coding?







Like most of us who have been in the industry for a while, I’m fascinated with the concept of vibe coding. If you read this blog, you know that I have used the majority of AI tools out there in the market. Initially, I only attempted to use them to create prototypes. As a product manager (PM), that’s all I really care about. I don’t write code, I don’t manage production sites, I don’t own the release process, etc. etc. etc.

However, the quality of the tools continues to improve. This leads me to the obvious question: Can you actually produce high-quality software using AI?

I think the answer is yes.

As I work with AI, I’ve come to the conclusion that the trick is to treat AI as a very bright but extremely inexperienced and naive colleague. (See what I did there? Trick or treat?) With apologies to college students everywhere, I tend to think, “Would an intern understand this instruction?” And if the answer is no, I dumb it down until I think the intern would understand. This technique has worked well for me in previous interactions with AI, so my assumption is that it will work well in having AI build software from scratch.

Is that true? Let’s find out.

First, when building any software, I always start with the end in mind. Carefully lay out the goals of the project. How do we know if we’re successful? If I am working with an experienced engineering team, this discussion can be pretty brief. I could say something like, “I want the site to be enterprise grade and support 1,000 concurrent users” and the team would probably build a site that scales well, is secure, has a good login system, monitoring, and so on. There are tons of things that I assume a good SaaS site has and my engineering team knows all that.

However, AI doesn’t know all that. So, what to do?

Well, I know what all those requirements are. I just need to write them down. We call that a product requirements document (PRD). Yes, the same document that we used to write all the time. Some product organizations don’t use PRDs any more and I haven’t written one in five years, but I’ve written hundreds of them in my lifetime. Start there.

A good PRD includes business goals, but it also includes nonfunctional requirements: the site must be secure, the site must be tested, only authorized users can use the site, and so on. Usually, if I am working with a very professional team, I skip all those things. They already know them; why repeat them over and over? But again, AI isn’t experienced and doesn’t know these things. So, I listed them all.

After writing a detailed PRD, I used Claude Code. As I mentioned in the Claude Code Micro Review, it allows me to give it rules that it compiles into the Claude.md file. So I created a new directory, put the PRD in there and told Claude Code to write up an architecture document based on the PRD. Surprise: I didn’t tell it to write code, I didn’t tell it to make a site, I just said, “What would the architecture be?” Here’s the actual prompt:

I would like an architectural framework for building my SaaS business that will connect authors to readers. This SaaS application will run on AWS and should make use of appropriate AWS PaaS services. It will be deployed via Terraform and use GitHub Actions for CI/CD. Propose a full tech stack for our MVP. Reference the PRD in the docs and put your recommended architecture in a file called architecture.md in the docs folder. Keep it simple, this is just the MVP and we can add advanced features later.


And this is what I got back:







The document I got back was decent, but I wasn’t happy about all the choices it made. No worries, I reviewed the document, made some changes and now I had a rough sketch of how the site would be architected.

OK, so then I typed /init into the Claude Code CLI and Claude read the directory. It saw the PRD, it saw the architecture and it said, “OK, you’re in the early stages of development for a SaaS site, let me help you set that up.” It proposed a phased implementation plan that, again, I reviewed and made changes to, as shown here:




The pattern is this: set a high-level goal, ask for a plan to achieve that goal, review the plan, repeat. Just like working with a very junior employee: minimize assumptions, give them key learnings, provide relevant documentation.


Based on this I asked for a detailed plan to implement Phase 1. This plan was pretty good also. Note that I’d been working for several hours with Claude but zero code had been written. That’s the point. We are talking about goals, we are setting up our environment, doing architecture and design work, but we are not writing code yet.

The other thing that I’ve done in my testing is to download best practice docs and ask Claude to compare our state to best practice. I then take that output and use it to stack rank epics to fix those issues. Here’s the plan Claude developed from the AWS Terraform best practice document:




Notice that the work items are assigned priorities. So, I implemented the two P0s and put the P1 and P2 items into the backlog, just like I would do normally in my PM role. Also, hilariously, Claude makes the same mistake a junior would make and assigns time estimates to each item. Those estimates are wildly off, just like any estimate from a junior. That’s OK, I’ve worked with plenty of juniors in my career. Just a little coaching and we’ll be fine.

I have continued with this process and now I have a very nice setup based on Terraform running on AWS. It may not be the best-designed application ever, but it has all of the components I would expect. It has a real CI/CD pipeline, it has automated regression testing, it has a proper backend and frontend, a rational scaling plan, and security checks in the pipeline. All the things I would ask a junior team to look at before we went to production.

So, I think that this process works.

Is this vibe coding?

No idea.

Monday, October 13, 2025

Claude Code Micro Review


I’ve recently come across product managers (PMs) who are using Claude Code for PM tasks like document review.  This interested me because I assumed that Claude Code was optimized for, well, code.


It turns out that despite the name, Claude Code is just the CLI version of Claude for desktop.  It works a little differently than the desktop version and has some nifty features that the desktop version doesn’t have.  And it’s pretty darn useful for doing PM work.  I find this fascinating because you would think that Anthropic would add those features to the desktop product.  Perhaps they’re working on it?


There are two key features that really interest me in Claude Code.  The first is /init.  As I’ve discussed before, context is king when it comes to AI, and this is a feature I’m shocked to find missing from most AI assistants.  When you issue the /init command, Claude Code scans the directory you’re in and creates a Claude.md file:



The Claude.md file is basically instructions for what this directory is and what you want Claude to do.  This way, you can set rules for Claude.  The original intent was that you would use Git to grab a repo, have Claude Code scan the repo and then set rules like “always use React” or “conform to coding best practice Y” or some such.  However, since the underlying Claude functionality is all there inside of Claude Code, you can use it for pretty much anything you like.
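

For example, a hypothetical Claude.md for a code repo might contain rules like these (this is my own illustration, not generated output):

- Always use React for UI components; do not introduce another UI framework.
- Follow the lint and formatting configuration already in this repo.
- Run the test suite before committing, and never commit failing tests.
- Never commit secrets, API keys or .env files.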


So, you could download all of your customer interview transcripts into a directory, have Claude Code read them and then ask questions about them.  Very cool functionality for a PM.  Since it sits on the Claude platform you can also use MCP to talk to external sources like Jira:



In this case, I connected it to Jira and asked it to review my current backlog and create a new sprint focusing on the highest priority items.  I gave it some basic requirements, like security and customer impact, and told it to make a new sprint based on those criteria.  It did a good job.


This type of stuff can consume endless hours of PM time so it’s interesting to me to see how much automation I can stuff into my toolchain.  I’m not really interested in taking on a brand new toolchain—I already know how to use GitHub and Jira—but I’d like my AI assistant to get in there and do some of the grunt work.  For me, this is key: AI should simply use the tools you already use and act as an extended member of your team.  When AI does that, you get huge benefits because you don’t need to retool, you don’t need to retrain everyone and you instantly get benefits from the AI assistant.


The second key feature is that it can make API calls for you or use MCP servers to access products like Jira or Slack.  I created an API token for my GitHub account and asked Claude Code to summarize the code changes to a project:



It was able to pretty accurately summarize the recent commits to this repo, something that’s super handy for a PM to do, but not something we usually have time for.  In the past, I would bug my eng team to see if feature X or Y got checked into main or whatever.  Now, I can just ask Claude to see if a PR got merged or if the test cases passed.  Very handy, and it cuts down on the amount of engineering time I consume with dumb questions.
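

Under the hood this is ordinary GitHub REST API work.  Here is a rough Python sketch of the kind of call involved (I’m not claiming this is what Claude Code does internally); the owner and repo names are placeholders, and the token comes from an environment variable rather than being hard-coded.

# recent_commits.py -- a sketch of listing recent commits via the GitHub API.
# OWNER and REPO are placeholders; set GITHUB_TOKEN in your environment.
import json
import os
import urllib.request

OWNER, REPO = "your-org", "your-repo"  # placeholders
token = os.environ["GITHUB_TOKEN"]

req = urllib.request.Request(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits?per_page=10",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.github+json"},
)
with urllib.request.urlopen(req) as resp:
    commits = json.load(resp)

for c in commits:
    # Print the short SHA and the first line of each commit message.
    print(c["sha"][:7], c["commit"]["message"].splitlines()[0])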


While I’m pretty comfortable working at a command prompt, it’s not where I normally do my work. I’m usually using things like Slack instead.


It turns out that there is also a Slack integration:


https://github.com/mpociot/claude-code-slack-bot


However, this implementation is Mac only.  Being a Windows user, I was a bit frustrated by this until I realized that I could just fix this myself using Claude Code.  So, I forked the project to my GitHub account and asked Claude Code to fix the problem.  After a bit of back and forth, I found that the original project had a couple of bugs which Claude Code also fixed.  It took me a couple of hours (mostly because I’m not really a developer) but I got it working.  I then asked Claude Code to write a PR so these changes could go upstream to the open source version:


This is not something I could have done on my own.  In fact, in all my years working on open source (my first open source project was OpenStack in 2010), this is the very first code I have ever attempted to upstream.  I have written PRs before but always for things like doc bugs or other non-code items.  


With the Slack integration working, Claude Code is simply another team member on Slack, doing things I ask.  Note that it is running on my Windows machine now:  



Of course, you can combine these actions.  Let’s say you wanted it to implement a specific Jira ticket.  In this case, I asked Claude Code to take a specific Jira ticket, look at the code base and write a plan about how that ticket could be implemented:



Now I can take this plan and discuss it with my engineering team.  Does this plan make sense?  Does it break things?  And so on.  


Of course, I can also have Claude Code just do the work.  After approving the plan, I had it open the PR directly via the GitHub integration:



Unsurprisingly, Claude Code is really good at creating a PR and making the change.  It talks to Git natively and correctly creates a branch, commits changes, and so on.  Since this is the original purpose of Claude Code, that’s not terribly surprising.


In summary, if you’re willing to put up with a command line interface, you can pretty easily build a custom AI assistant for your project that talks to Jira, Git, Slack or whatever you want and does work across those platforms.  Note that I didn’t write any code here, I just gave Claude Code instructions and it did the work.


IMPORTANT NOTE:  If you are using Claude Code or any other tool to access APIs, please be careful how you manage your API keys or other security information.  Do not hard-code API keys into your code and do not upload your passwords or keys to GitHub.  The safest approach is to use a secrets manager, or to store them for local use in a .env file or an environment variable.
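

As one sketch of that pattern (the JIRA_API_TOKEN name is just an example), keep the key in the environment or a git-ignored .env file and read it at runtime:

# config.py -- one safe pattern for handling API keys, as a sketch.
# Keys live in the environment (or a git-ignored .env file), never in code.
import os

try:
    from dotenv import load_dotenv  # optional: pip install python-dotenv
    load_dotenv()                   # reads a local .env file if present
except ImportError:
    pass

JIRA_TOKEN = os.environ.get("JIRA_API_TOKEN")
if JIRA_TOKEN is None:
    raise RuntimeError("Set JIRA_API_TOKEN in your environment or .env file.")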




Here we go again

 


I recently attended an AI conference in San Francisco.  During the conference, several speakers cited this MIT report: State of AI in Business 2025.  They seemed shocked at the study’s headline finding that 95% of GenAI projects fail to show a financial return.


I wasn’t shocked.


Hopefully, if you read my blog regularly, you weren’t shocked either.


Some things don’t change.  Technology adoption in the enterprise has some well-worn footpaths, and this is one of them.  When something like GenAI captures the public eye, there is pressure to simply adopt that thing.  Teams are told, “You need an AI strategy” or similar.  Often, the people giving that instruction have no idea what AI really is or what it does.  Thus, the people receiving the instruction have very little context about the outcome their leadership wants to achieve.  If your goal is simply to adopt AI, then you can do so.  It’s not hard to bring a chatbot or other LLM-based tooling into your organization.


What the MIT study asked about was value.  Did these companies achieve positive ROI from these investments?  Well, no.  Probably because the teams involved were not told to do that.  They were simply told to go get AI.  Which they did.


As I discussed in my previous blog post, Why You Need an AI Business Plan, AI, just like any other technology you bring into your organization, needs a business plan to go with it.  The “why” question is the critical one to address before you begin the project, along with the “what” and the “how.”  Why are you adopting AI?  What benefits do you expect your company to experience?  How will you know when those benefits have been achieved?


Without answering these questions, you’re just pouring money down the drain. 


Here are six things you need to do BEFORE you agree to adopt a technology:


  1. Define the key stakeholder.  Who will benefit most from this project?  Are you doing this to save money?  Then finance (for example, the CFO) is a likely sponsor.

  2. Define the outcomes.  After this project is done, what is different?  This is often the focus of PR/FAQs, but you can use any format you want.  The point is to put down on paper what happens if we do this project.

  3. Define the measures.  If you know the positive benefits you’ll get, how will you measure this?  You’ll need to take a baseline first or you won’t know if your project moved the needle or not.  Are you currently measuring this?  If not, start right now.

  4. Working backwards from measures, design the project.  Only after you know the result you are trying to achieve, do a high-level architecture and solution concept document.  “We are going to build X to achieve Y” is always how you want to start any design.

  5. Iterate.  Break the project down into small discrete steps.  Make sure that you are achieving the claimed benefit early in the process.  Don’t wait for a magical “big bang” at the end.  You should be making progress at every sprint.  You should be measuring this progress.  

  6. Pivot.  You were probably wrong in steps 1-5, above.  Evaluate your mistakes, adjust, replan.  You won’t really know what the heck you are doing until you do it.  So, take small steps and evaluate your performance as you go.  You’ll be wrong.  That’s OK.  Adjust.


The really big thing here is to expect and embrace failure.  In many organizations the team is not rewarded for declaring failure.  That’s a very unhealthy way to manage.  As an example, if you ask a team to try to reduce cost by 15% and they come back six weeks later and say, “Hey, that goal isn’t possible. We should cancel this project and do something else,” that’s a positive result.  They just figured out that this thing won’t work.  However, if you punish them for failure, they’ll keep working on it for months or years, only to fail after you’ve spent untold amounts trying to build something that cannot work.


Think about this another way.  If you are a manager, how many people have you promoted for killing a major project?  How many have you promoted for delivering the impossible?  My guess is none and tons, respectively.  This means you are rewarding hero culture.  Instead of encouraging people to be clear-eyed and dispassionate, you encourage them to take risky bets because they have personal career development goals on the line.  Is it surprising that failure is the most likely outcome if you have a culture like that?  It shouldn’t be.  Yes, the team should be willing to take risks.  No, they shouldn’t force a bad hand because they’re afraid to tell you the truth.


Like I said, this isn’t new.  I’ve experienced similar bubbles in my career a couple of times before.  During the dot-com boom of the late ’90s and early 2000s, companies rushed to get on the internet with zero idea of what the internet was for.  Later, cloud adoption mandates drove all kinds of low-value projects.  This is what prompted me to write my book, Why We Fail, which discusses enterprise cloud adoption and the issues just mentioned.


Perhaps I should come out with a second edition about AI adoption.  I could call it “Here We Go Again.”



Monday, October 6, 2025

The Time Trap

 


Your time estimates are wrong.  Don’t try to fix them.


Yeah, I said that.


I have been part of millions of planning meetings. I have seen thousands of project plans. The time estimates in them are all wrong.


All of them.


Instead of spending time coming up with a very precise estimate for how long it will take to build a feature, spend about five minutes working with engineering to figure out if the feature is "Large", "Medium" or "Small" (otherwise known as T-shirt sizing).  Define those terms by talking about the number of sprints required to get them done.  You’re still wrong, but you didn’t waste days or weeks coming up with the wrong estimate.


For example:


Small = 1-2 sprints

Medium = 3-9 sprints

Large = 10 or more sprints


Once you accept that the estimate is wrong, you can move on to planning with uncertainty.  When you stack rank, you know that "Small" things at the top of the stack will likely ship this quarter, but not definitely.  Similarly, "Large" items at the bottom of the stack aren't shipping any time soon.  This drives your roadmap: estimate by quarter how much work you will get done in those chunks, based on stack rank.  Of course, smaller chunks are better than larger chunks.  If possible, take a look at the large items and see if you can break them down further.  The smaller the chunk, the more accurate your estimate will be.
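

If you want to see how little math is involved, here is a toy Python sketch of rolling a T-shirt-sized stack rank into a quarterly view.  The sprints-per-quarter figure, the cap on "Large" and the feature names are assumptions for the example, not recommendations.

# roadmap_sketch.py -- a toy illustration of planning with T-shirt sizes.
# The sprints-per-quarter figure and the cap on "Large" are assumptions.
WORST_CASE_SPRINTS = {"Small": 2, "Medium": 9, "Large": 12}
SPRINTS_PER_QUARTER = 6

def likely_this_quarter(stack_rank):
    """Walk the stack rank top-down and keep what plausibly fits."""
    used, fits = 0, []
    for feature, size in stack_rank:
        cost = WORST_CASE_SPRINTS[size]
        if used + cost > SPRINTS_PER_QUARTER:
            break
        used += cost
        fits.append(feature)
    return fits

backlog = [("SSO login", "Small"), ("Audit log export", "Small"), ("Usage billing", "Medium")]
print(likely_this_quarter(backlog))  # ['SSO login', 'Audit log export'] -- still wrong, but cheap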


Is your estimate correct? No, it is not.


If you spend weeks developing a time estimate, will it be correct? No, it will not be correct.


Is it possible to be 100% correct in your estimate? No.


So, instead of investing a lot of time, take a quick stab at it, base your estimates on some objective reality and move on. It's wrong, but that's OK. The broad estimates can still give you an idea of how many things on the roadmap will be accomplished in a quarter.  The reality is that you are way better off just accepting this level of uncertainty than trying to get to a “correct” answer that you likely will never find.  For a PM, this discussion is usually driven by roadmap.  Your execs want to know when a certain feature will ship.  Don’t make the mistake of committing your product team to a specific ship date for a feature under development.  PM doesn’t commit to shipping deadlines.  Only engineering can ship a feature.  Let them answer the “when” question.


What you want to do is be very transparent about your stack rank and the size and complexity of features.  Small features with low risk and high priority should show up soon.  If they don't, there is something wrong.  Focus on that.  This type of discussion can inform your leadership about what you’re working on and why, and give them a view of where the effort and risk are.  That’s what they really need to know.


What you should be presenting instead:


  1. A good stack rank. Is eng executing against the stack rank? If not, stop. Until you agree with eng on stack rank, nothing else matters.

  2. A good view of complexity. Are "easy" features really easy? If eng says "piece of cake" do you get the feature quickly? If not, stop. If you don't have a good view of complexity and risk, you cannot estimate with any accuracy.

  3. A VERY VERY rough idea of time. It's way more important to know that X comes before Y. If you say X is six weeks out and it ships in eight weeks, does it matter? Not really. If you tell sales that Y is "THE MOST IMPORTANT THING ON OUR ROADMAP" but you don't ship it? You have a problem.


If you have those three things, you can build a roadmap and get leadership buy-in. 


I’ve said this before, but it bears repeating.  There are really only two things that distinguish a good product team from a poor product team.  Those are quality and velocity.  If you are executing with a high degree of quality at a high velocity, your odds of success are much higher than if you are operating with low quality and/or low velocity.  As a product management leader, quality and velocity should be the two things you focus on the most.  What you should not do is try to manage velocity with deadlines.  Artificially setting dates simply lowers quality and doesn’t really address the underlying velocity problem you’re trying to solve.  



Do You Have a Solution in Search of a Problem?

 




When working with product teams, I often find that they’re working in one of these modes:


They have a problem and are looking for a solution.


Or


They have a solution and are looking for a problem.


As product managers (PMs) we are supposed to be focused on customers and their problems.  However, it’s not uncommon to find product teams that start with a solution and then look for problems that their solution solves.


That’s not a great place to be.


First of all, if you already have a solution in mind, that drives the discussion.  There is a reason why “if all you have is a hammer, the whole world looks like a nail” is such a common phrase.  It’s just human nature to try to make what you have work.  In product management circles, we tend to use the term “confirmation bias,” but it’s the same idea.  If I go into a customer meeting with something specific to sell, I can usually find reasons why the customer needs that thing.  Which is fine, if you’re in sales.


However, if you’re trying to decide what to build, that’s the worst thing possible.  All you’re doing is selling the thing you already have.  That may or may not mean that you have the thing the customer really needs.


On the other hand, if you walk into the meeting looking for problems, you are now open to whatever the customer has to tell you.  I have spent the majority of my career working on enterprise-class software, so other areas may be different, but I can tell you that enterprise customers are more than happy to tell you their problems.  However, most enterprise software companies are not really listening.  What you see all the time is product teams trying to sell what they have on the truck instead of trying to figure out what the customer really needs.


And that’s a shame.


Here are some symptoms of solutions looking for problems:


  1. Engineering builds a prototype that you didn’t ask for.  It’s pretty common for high-performing engineering teams to come to you with great ideas.  Sometimes those ideas are truly amazing and you learn things.  However, most times, these solutions aren’t actually aligned to customer requirements.  Now you have an amazing idea and you go around trying to figure out how to use it.

  2. Leadership “vision” is divorced from customer reality.  If senior leadership is driving the roadmap, it’s common for the roadmap to  become divorced from customer requirements.  Good leadership will listen to the field and adjust based on what they hear.  Yes, you need to have a strong product vision. No, you should not ignore what customers are telling you.

  3. New ideas for customer problems to be solved are automatically quashed.  Yes, I admit I have done this.  If you are leading a product team with very tough goals, you have to remain focused.  Sometimes that means quashing or ignoring customer requirements.  All I can say is, know you’re doing this and do it on purpose if you need to do it.  You can get away with this for a while, but there are limits.

  4. PMs are not meeting directly with customers.  Several times in my career I’ve been told by sales that I don’t need to talk to the customer—that sales will tell me what the customer wants.  With all due respect, that’s not acceptable.  PM needs to speak to customers DIRECTLY and regularly.  Without direct contact, PM will make poor decisions.


So, what is a PM leader to do about all this?  How can you shift gears and move away from solutions and towards problems?


  1. Prioritize quality customer time.  I cannot emphasize enough how vital this is.  Unless you are talking directly to customers in relatively unstructured settings, you simply do not know what they want.  Spend the time.  Focus on their needs.  Ask questions.

  2. Put customers first.  When meeting with customers, always make time to discuss their issues and ask about their problems.  I have been in hundreds of customer meetings where I am expected to go over the roadmap in detail in just thirty minutes.  That just isn’t enough time.  I need at least fifteen minutes to ask questions.  Therefore, my minimum customer meeting is an hour.  More is better.

  3. Focus on why.  When sales (or anyone else) says that customers want X, always ask “why do they want X?”  You need to dig down into the underlying customer needs to really understand the requirements and develop a solution to the actual problem.

  4. Have strong opinions, loosely held.  This phrase always stands out in my mind as the definition of PM.  As a PM I have to be very firm in my opinion.  I have to have a reason for it.  I must be able to defend it.  On the other hand, I have to be willing to abandon my position given evidence that it’s wrong.  I can’t fall in love with my own plan.


As usual, when we get off track it’s because we forget why we are here.  As a PM team, our only job is to make the product better by serving the customer.  We don’t do that through surface interaction.  When I joined Splunk many years ago, I inherited a huge (1000+ user stories) backlog of “customer requirements” that the team had been working on.  The first thing I did was insist that I meet directly with the customers who had these problems.  Every time I talked to one of those customers, I learned something.  The product got better as a result.  The PM is not a feature vending machine where you put requirements in and get features out.  The only true measure of a successful product is customer adoption.  If the customer likes and uses the feature, it’s a good feature, full stop.