Tuesday, November 18, 2025

Yes, Virginia, this is what a bubble feels like


I’ve been reading with some interest the debate about the AI bubble, mostly written by folks who didn’t participate in previous technology bubbles.  I have the (good?) fortune to have participated in three of these during my career.   The most extreme that I have personally experienced was the dot-com bubble.  At that time, it felt like the entire world had changed.  It was now possible for a software company to compete directly with “real” companies like Sears or General Motors.  This was INSANE at the time.  Yes, software was a big market. No, we didn’t go after Fortune 500 companies on a regular basis.  Suddenly, we could.


So, let’s talk about what a bubble actually is.  According to Wikipedia:


An economic bubble (also called a speculative bubble or a financial bubble) is a period when current asset prices greatly exceed their intrinsic valuation, being the valuation that the underlying long-term fundamentals justify. Bubbles can be caused by overly optimistic projections about the scale and sustainability of growth (e.g. dot-com bubble), and/or by the belief that intrinsic valuation is no longer relevant when making an investment (e.g. Tulip mania).


https://en.wikipedia.org/wiki/Economic_bubble


As an aside, I was quite pleased to discover Tulip Mania when I read this article on Wikipedia.


Getting back to our point here.  The definition of a bubble is a financial one.  In the business, we tend to talk about things like markets or wild product claims, but those are just external symptoms.  The actual definition is all about valuations vs. reality.  During the dot-com bubble, for example, Commerce One carried a valuation around 210x revenue (that is, market value divided by annual revenue).  They had basically no revenue, but their valuation was sky high.  They eventually folded, declaring bankruptcy in 2004.  I had a few friends who worked there and they told me I was dumb for not moving over.  I was at Microsoft at the time.  It’s crazy when someone tells you to leave Microsoft for financial reasons.


So, are we in an AI bubble?


Well, at roughly 42x and 27x revenue respectively, OpenAI and Nvidia certainly seem a bit high, but not as high as Commerce One.  Of course, they are WAY WAY WAY higher than “regular” companies like Adobe at about 6x.  So, it does feel a tad high.


OpenAI and Nvidia are often cited as examples, but they are nowhere near the top of the list. Perplexity is at 180x and Anthropic is at 61x.  Of course, OpenAI, Perplexity and Anthropic are not public companies.  They are startups, so by definition they are going to be in a mode where they invest in growth and customer acquisition rather than revenue.


Naturally, VC funding reflects this too. In the USA, AI companies captured about 35-40% of total VC funding in Q3 2024, which is quite a jump from 14% in 2020.  This tells us that VCs think this market is hot, and the valuations of pre-IPO companies reflect it.


I’m not an economist, but the data seems mixed.  Yes, there is significant growth in valuations, and yes, they look a bit high, but not quite as high as the peak of the dot-com bubble.


However, I can certainly tell you from experience that it FEELS like a bubble.


What does a bubble feel like?

  • Companies get funded on basically nothing: no real business model, no customers, no revenue.

  • Money is spent like water.  Hiring goes insane.  Salaries explode.

  • Everyone is suddenly in that business, even if they really aren’t.

  • “This changes everything” mentality.  All previous knowledge is suddenly obsolete.


Are these things happening? Let’s look at the evidence.

Funding

During the dot-com boom, you could get VC money for almost anything.  It was literally insane.  A friend of mine started an online sock business.  Yes, a website where you could buy socks.  He got millions.  Of course, it failed.  But Amazon is now a thing.  To put it another way, real e-commerce companies came out of the bubble, and if you had invested in Amazon you would have done quite well.


Today?  Take Safe Superintelligence (SSI).  Founded by ex-OpenAI chief scientist Ilya Sutskever in June 2024, SSI has raised about $3 billion and reached a $32 billion valuation with approximately 20 employees, zero products, and zero revenue.  Sutskever has explicitly stated the company won’t release anything until it achieves “safe superintelligence” at some unspecified future date.  The valuation jumped sixfold in less than a year, from $5 billion to $32 billion.  Investors are literally betting billions on a technology that might not exist for decades.

Spend

During the dot-com boom, companies did all kinds of crazy things.  They would send the entire company to some tropical island.  They would offer employees perks like free massages or a concierge.  Most offered free food.  Hilariously, that’s a standard Google thing now, but at the time it was a crazy, excessive perk.


Today?  OpenAI spent about $9 billion this year in order to lose $5 billion.  Their spend on infrastructure is truly insane.  They have also signed San Francisco’s largest office lease two years in a row and occupy nearly one million square feet of office space in the city.  Meta reportedly offered a single AI researcher a pay package worth $100 million.


The Cool Kids


During the dot-com bubble, it suddenly became popular to have a .com address.  The web went from an interesting nerd hangout to the center of the business world.  Literally everyone needed a website, even if they had no idea what that was.  Things like Myspace (which came a bit later) were huge for a short time but quickly faded.  Of course, we also got Facebook, which is still here.


Today, it’s hard to find a software company that ISN’T claiming to be an AI company.  Cisco, Oracle and Intel are all claiming to be AI companies now:


Oracle: What is Oracle AI?

Cisco: Cisco AI Solutions

Intel: AI Ready Datacenters 

The New Paradigm

When dot-com happened, the general feeling was that this was the most amazing thing ever:


In the Web's first generation, Tim Berners-Lee launched the Uniform Resource Locator (URL), Hypertext Transfer Protocol (HTTP), and HTML standards with prototype Unix-based servers and browsers. A few people noticed that the Web might be better than Gopher. In the second generation, Marc Andreessen and Eric Bina developed NCSA Mosaic at the University of Illinois. Several million then suddenly noticed that the Web might be better than sex.

— Bob Metcalfe, InfoWorld, August 21, 1995, Vol. 17, Issue 34.


As it turns out, that wasn’t completely wrong.  Ten years after the bubble burst, Marc Andreessen wrote the famous Why Software Is Eating the World essay.  That was 2011.  Today in 2025, we still have brick-and-mortar businesses, but we also have things like Airbnb, Uber and Spotify: software-only companies effectively disrupting very traditional brick-and-mortar industries (hotels, taxicabs and music stores, respectively).


Today, everyone is pretty convinced that AI will change everything:


“AI is going to reshape every industry and every job.” 

– Reid Hoffman, Co-founder of LinkedIn


Of course, this isn’t a new thing either.  The economist John Maynard Keynes famously predicted a fifteen-hour work week by the early 2000s in a 1930 essay.


So, yes.  I can say that all of this feels eerily familiar.  Some of the more insane claims are almost word-for-word things I’ve heard before, including the idea that we won’t need to work.  Of course, that claim was made in 1930 about the year 2000.  Twenty-five years ago.


I’m still waiting.  




Thursday, November 13, 2025

What do Product Managers and Toddlers Have in Common?



Besides being hard on the furniture, we both ask “Why?” every six seconds.  As product managers (PMs), the thing we really need to own is the “why” question.  While we care deeply about the “what” and the “how”, we need to focus on the “why”.


In any product organization, there is always more work that you would like to do than you can do.  Thus, we need to prioritize.  Traditionally, PM works on the stack rank and develops a roadmap.  We then work with engineering to bring that roadmap to fruition.


The problem, of course, is that everyone in the org has their own ideas about what you should do.  Thus, anything you propose is subject to question and debate.  As you get into the details of this discussion, it is easy to fall back on opinion-based arguments: you should build feature X because your boss wants it, or because Sales wants it, or because Engineering has come up with an amazing new prototype that looks really nifty.


So, what should you do?


Well, you shouldn’t do any of those things.


You instead should ask: Why are we here?  What are our goals?  How do we achieve, or fail to achieve, those goals?


In short, you need to have the “why” question totally and completely nailed.


If you understand why you are building feature X, you are way ahead of the game.  Now when you get into a discussion with Engineering or Sales or senior leadership, you start off with “We are doing X because of Y.  We believe that this is the correct course of action because of Z.”


I cannot tell you how often people have presented information to me as fact when it’s actually just their opinion.  I’ve been told, “Customers want feature X,” so many times that I simply cannot count them.  When presented with this type of information, you can simply accept it as true or you can dig in.  I can assure you that many of these claims do not stand up to any sort of scrutiny.


Example conversation I’ve actually had (details redacted to protect the guilty):


Senior Leader:  “We know that customers want us to do X”

Me:  “Sorry, how do we know that?”

SL:  “They’ve told us that.”

Me:  “Why do they want us to do that?”

SL:  “It doesn’t matter, we need to do it.”

Me:  “Sorry, but the details really matter if we are going to design this properly.  Can you point me to a couple of customers you’ve talked to about this?”

SL:  “Talk to person Z, they know all about this.”



Me:  “Hello Z, SL told me to talk to you about X.”

Z:  “OK, you really need to build X.”

Me:  “Can you point me to a specific customer who wants X?  I need to talk to them directly.”

Z:  “Why do you need that?  I can tell you they want it.”

Me:  “Details matter; I need to gather requirements.”

Z:  “That seems like a waste of time. We know what they want.”

Me:  “I don’t.  Can you give me some pointers please?”

Z: “I’ll get back to you.”


Spoiler alert: They didn’t get back to me.  


The point isn’t to bitch about leadership or whatever.  The point is that even if someone really, truly believes that you need to build something, it’s still up to us as PMs to do our jobs and figure out what the customer needs.  To figure out the “why” question.  To get into the details about what happens if we don’t build it, or if we build something else that solves the same problem.


I know that my tendency to ask “Why?” and to always go back to source data drives some of my colleagues crazy.  But I do it anyway.  Some have told me that it feels like I don’t trust them.  That’s unfortunate, because that’s not my intent.  Just like the telephone game, information does not travel through organizations intact.  There is always some loss of detail or some incorrect reporting.  It’s just human nature.  We are not perfect.  So, as a PM, I want the source data whenever I can get it.  The closer I am to the customer, the more likely I am to make the correct decision.


And here is the lesson: If you are a junior product manager, don’t take “no” for an answer.  Get the source data.  Talk to the customer directly.  Ask “Why?”  If you are prevented from doing this, go to your leadership.  Be polite, but firm.  You must have this to be successful.  If your leadership doesn’t agree, you might not be in a healthy product organization.  



ChatGPT Agent Mode Micro Review



As I continue my research into AI tools for product management, one tool that really stands out for its simplicity is ChatGPT’s agent mode.  There are a couple of different tools that work in a similar way, but the current ChatGPT agent mode is probably the simplest of them all and is very useful for product managers.


After you log in to ChatGPT, before you issue a prompt, you can hit the little plus sign and select Agent mode (assuming you have the correct type of account).  In this case, I asked it to do some sentiment analysis on Reddit for me.


The result was honestly pretty good.  I wouldn’t say it’s world class, but it’s similar to what I would expect a junior PM to be able to do.  It also saved me hours of combing through dozens of posts and allowed me to check the source data quickly.


This is a general theme with the current crop of AI tooling: They’re really good at summarization.  


It’s important to note here that you’re essentially asking ChatGPT to scrape the website.  That means that if you have to log in to the site, or you’re scraping confidential customer data, you’re handing your login and customer data to OpenAI.  Your call if you want to do that.


For me, agent mode is a way to point AI at systems that I don’t know anything about and don’t want to learn.  Yes, you could use the site’s API and write a script to do this, or you could use MCP, but with agent mode you just ask ChatGPT and it does the work.  Zero effort on your part.  Very handy for doing research against public websites specifically.  If you wanted to run a specific automation every day and make it an essential part of your work, you may want to spend some effort to clean it up and make it more reliable, but this technique gets you data quickly for pretty much any site on the internet.
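
For comparison, here is a minimal sketch of the “write a script against the site’s API” route, in Python.  It assumes the requests package is installed and that Reddit’s public JSON listing endpoint is still open for unauthenticated, low-volume reads; the subreddit name is just a hypothetical example.

    # Pull recent posts from a subreddit so they can be skimmed or fed to a sentiment step.
    import requests

    SUBREDDIT = "ProductManagement"  # hypothetical target; use whatever community you care about
    url = f"https://www.reddit.com/r/{SUBREDDIT}/new.json"

    resp = requests.get(
        url,
        params={"limit": 100},
        headers={"User-Agent": "pm-research-sketch/0.1"},  # Reddit throttles generic user agents
        timeout=30,
    )
    resp.raise_for_status()

    for post in resp.json()["data"]["children"]:
        data = post["data"]
        # Title, score and comment count are usually enough to spot recurring complaints.
        print(f'{data["score"]:>5}  {data["num_comments"]:>4}  {data["title"]}')

From there you could paste the output into ChatGPT for the actual sentiment pass.  The trade-off is obvious: you own the data collection and nothing sensitive leaves your machine, but you’ve now signed up to maintain a script.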


Here are some use cases that I’ve used this for:


  1. Sentiment analysis.  This is what I did in the example I just described.  Use ChatGPT to read social media and look for trends.

  2. Instructions and documentation.  There are several products I use that have very dense manuals.  Those manuals are online.  You can ask ChatGPT to read the manual and then you can ask it questions.  “How do I turn off this notification?” now gets answered for me, and I don’t have to wade through three hundred pages of text.

  3. UX analysis.  This one isn’t super obvious, but if you ask the agent to use a product website, you can then ask it to critique that site.  Since it has access to the running UI, it knows how the product actually works.   So, you can use a prompt like, “As a professional UX designer, what are your top ten suggestions about how to improve the UX for http://my.site.com?”

  4. Competitive analysis.  You can also point it at your competitors.  “Use http://my.competitor.com and tell me how it’s better than http://my.site.com” or even “Log in to http://my.competitor.com and tell me about the new features.  Summarize how those new features work.”  Etc.  


Of course, if you just want extended search, like “Tell me who the most prolific and respected voices about product management are and summarize their top advice for product managers,” you can use something like Perplexity.


I tend to use Perplexity when I need deeper search and summarization from the web but I don’t have a specific site in mind.


Monday, November 3, 2025

Agile and the age of AI

 


Someone asked me yesterday if we can still use agile methods in the age of AI.


My answer was an unequivocal yes.


But then I thought about it a bit.  The reality is that I’ve never worked in a “pure” agile shop.  I’ve run into folks who are truly experts in this area, and when I listen to them I realize that most places I’ve worked have bent the “pure” agile model to make it work for them.  It’s to the point that I don’t really know what agile stands for anymore.


As a longtime product manager for a variety of SaaS applications, I know what a “good” SaaS product looks like.  I’ve worked with very strong engineering teams and I’ve worked with very weak engineering teams.  I’ll say this again for those who are new to my blog series: the only differences between good teams and bad teams are quality and velocity.  Good teams ship with high quality at high velocity.  Thus, the real test here is, “Can AI ship high-quality code at high velocity?”


We know that AI can ship with high velocity.  We’ve all seen the demo.  How do we help AI ship with high quality?


Perhaps not surprisingly, all the things we’ve learned about running high-quality SaaS sites are still true.  Shocker.  Do they apply to AI?  As I discussed in a previous blog post, you can and should set up your AI tooling to use best practices, and you should use the same “begin with the end in mind” strategy that you use with regular software development.


Most modern SaaS teams are running CI/CD (Continuous Integration/Continuous Delivery).  While not strictly part of agile methodology, it fits the larger agile mindset of moving quickly and shipping interim builds.  Interestingly, CI/CD is not something that most AI coding tools support.  If you use a tool like Lovable or V0, you will simply get running code.  That is interesting, but running code is not a product.  SaaS applications are living things.  They change regularly.  This means that you need some way to inject code into your site on a regular basis without breaking things.


Which leads us to…


Testing.


A well-built site has very strong test suites that prevent “regressions,” which is our word for something that used to work but doesn’t work anymore.  If you’ve never worked on a SaaS product, you would be surprised how common it is for things that worked perfectly days, weeks or months before to just magically break.  Thus, your test suite.  What really confuses me is when I read folks online complaining that their AI tool made some mistake.  That the tool created a new bug or did something else wrong.  Why would I be surprised that AI coding tools create bugs?  Real programmers do this all the time.  It’s the reason we have things like commit checks and automated testing: to catch these inevitable errors.


AI, if anything, is even worse about regressions: it has a limited context window, so it forgets from one session to the next.  An AI programmer will simply do what you tell it.  If you tell it to fix a bug, that doesn’t imply to its robot brain, “Fix this bug without introducing new bugs.”  No, it just fixes the bug in the most expeditious way possible.  Even if you say things like “do not introduce regressions” in your prompt, it will introduce them anyway out of ignorance.  This means that you need a super strong test path that ensures your coding AI isn’t breaking things every time it makes a change.  Again, most AI coding tools do not work this way.


I would argue that if you don’t have a way to safely ship code that has been carefully and thoroughly tested, you’re in trouble.  This means that any AI product team is going to have issues here unless they address them right up front.


Thankfully, the industry has been working on this problem for years.  There are tons of tools out there expressly designed to help you solve it.  In the following discussion, I’ll show examples of what I am using in my personal development projects.  This in no way implies that this is the ONLY way or even the CORRECT way.  It’s just the approach I’ve used, and it seems to be working for me at this point.


Phase 1:  Begin with the end in mind.


When I begin a new project, I always start with a PRD.  This may seem odd because PRDs are often thought of as “anti-agile,” but I’m a big fan of long-form communication.  When I worked at HashiCorp we always used PRDs because we were a remote-first company (this was before COVID, when that was rare).  When I am working with Claude Code, for example, I usually start with an empty directory and a PRD.  That’s it.


Phase 2: Start from the bottom.


One thing that I see “vibe coding” tools do all the time is build the UI first.  I get why they do that; it’s the thing the user wants to see.  But the problem is that a properly built SaaS application sits on a complex platform.  If that platform isn’t right, you will have all kinds of trouble later.  So, start with the framework and then add features on top of it.  Try a prompt like this:


I would like an architectural framework for building my SaaS business as described in the PRD.  This SaaS application will run on AWS and should make use of appropriate AWS PaaS services.  It will be deployed via Terraform and use GitHub Actions for CI/CD.  Propose a full tech stack for our MVP.  Reference the PRD in the docs and put your recommended architecture in a file called architecture.md in the docs folder.  Keep it simple, this is just the MVP and we can add advanced features later.


This prompt caused Claude Code to build me an architecture plan that I could then review and edit.  I made some changes, but the plan was pretty decent.  Notice that I’m an opinionated consumer.  I know Terraform because I used to work at HashiCorp.  I’ve also been working with AWS for over 15 years, so I’ll want to host my application there.  I chose GitHub Actions because it’s the easiest way for me to get a full CI/CD platform.  You can make different choices, but the point is that you need to make choices.  These choices will dictate your ability to ship features later, so they do matter.


Phase 3:  Make the rules.


After I had those two documents (the PRD and the architecture plan), I ran Claude Code’s /init command in that directory.  Claude read those two documents, realized that I was very early in a software project, and populated the CLAUDE.md file appropriately.  Again, I had to review this file and update it.  There were several things I wanted the project to do and, most importantly, several things that I DIDN’T want it to do.  So, you do want to read that file carefully.
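
To give you a flavor of what ends up in that file, here is a hypothetical excerpt (not my actual file; your rules will depend entirely on your PRD and architecture):

  - All infrastructure changes go through Terraform; never edit AWS resources by hand.
  - Every feature ships with tests; run the full test suite before declaring a task complete.
  - Do NOT add new AWS services, third-party dependencies, or authentication schemes without asking first.
  - Keep the MVP simple; do not build anything that is not in the PRD.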


Phase 4: Testing framework.


Before you allow the system to write any code, you need to have a testing framework.  Because AI tools tend to just make stuff up as they go, you really have no idea what they’re going to do.  Thus, you really need some sort of testing in place that keeps them on track.  Especially if you’re not a full-time developer, you won’t be able to just read the code and tell whether it’s OK or not.  In my case, I haven’t written code professionally for over twenty years, so I’m not really qualified to review the AI’s code.  Again, testing.
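
To make that concrete, here is a minimal sketch of the kind of regression test I mean, using pytest.  The signup behavior and the in-memory stand-in are hypothetical; in a real project the tests would import your actual application code.  The point is that once a behavior works, a test pins it down so the AI can’t quietly break it on the next change.

    # tests/test_accounts.py -- minimal regression-test sketch (assumes pytest is installed).
    # In a real project you would import create_user/authenticate from your application;
    # the tiny in-memory stand-in below just keeps the example self-contained.
    import pytest

    _users = {}

    def create_user(email, password):
        if email in _users:
            raise ValueError("duplicate email")
        _users[email] = password

    def authenticate(email, password):
        return _users.get(email) == password

    @pytest.fixture(autouse=True)
    def clean_state():
        # Each test starts from a known-empty state.
        _users.clear()

    def test_new_user_can_log_in():
        # Behavior that worked yesterday has to keep working tomorrow.
        create_user("pm@example.com", "correct horse battery staple")
        assert authenticate("pm@example.com", "correct horse battery staple")

    def test_duplicate_email_is_rejected():
        create_user("pm@example.com", "pw1")
        with pytest.raises(ValueError):
            create_user("pm@example.com", "pw2")

Once a suite like this exists, the instruction to Claude becomes simple: run the tests after every change, and nothing is “done” while anything is red.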


Phase 5: Planning


One handy pattern that I use with Claude is to ask it to plan first and then work.  This means that when you are about to do something major, like creating the basic framework for your application, you want Claude to carefully plan it out first.  As I did for the architecture plan, I asked Claude to create a comprehensive step-by-step plan for how to build the application framework.  Then I took that plan, reviewed it, and broke it down still further: I took each phase of the plan and asked Claude to give me a detailed plan for that section.  This iterative planning process seems to give me a better result than a single one-shot prompt.  Remember that Claude and other tools have a limited context window, which means that smaller tasks are more likely to be completed successfully.  For my convenience, I usually ask Claude to write the plan into a Markdown file, and I keep all those plans.  Then, if something goes wrong later, I can say, “Open planning document X.  Compare the current state to that document.  What is wrong with the current implementation?”   This forces Claude to think about what was supposed to happen and then reflect on what the current state is.


Phase 6:  Best Practice


After Claude has built a working prototype, you need to figure out if this thing is any good.  If you’re not an expert in things like Terraform, it may be hard to tell whether the implementation is decent or not.  One trick is to find high-quality best practice documents.  For example, AWS has a great Terraform best practices document.  I downloaded the PDF and put it into my docs directory.  I then asked Claude to read that document and compare our implementation to it.  It came back with some very concrete things we could be doing better.


Phase 7: Iterate


As you learn, you loop back up into planning mode above.  You’ve probably made mistakes and the AI has certainly made mistakes.  Just like any project I’ve worked on, AI tools require me to iterate frequently to refine and repair what has been done.


At this point, you have a real software project.  It’s not ready for prime time yet, but you have a basic structure that allows you to create features, test them and push them into production.  Your AI software project is now in a better place than half the software teams I’ve worked with in the past.  Congratulations.