I recently attended an AI conference in San Francisco. During the conference, several speakers cited this MIT report: State of AI in Business 2025. They seemed shocked at the study’s headline finding that 95% of GenAI projects fail to show a financial return.
I wasn’t shocked.
If you read my blog regularly, hopefully you weren’t shocked either.
Some things don’t change. Enterprise technology adoption has some well-worn footpaths, and this is one of them. When something like GenAI captures the public eye, there is pressure to simply adopt it. Teams are told, “You need an AI strategy,” or similar. Often, the people giving that instruction have no idea what AI really is or what it does, so the people receiving it have very little context about the outcome their leadership wants to achieve. If your goal is simply to adopt AI, you can do so. It’s not hard to bring a chatbot or other LLM-based tooling into your organization.
What the MIT study asked about was value. Did these companies achieve positive ROI from these investments? Well, no. Probably because the teams involved were never told to do that. They were simply told to go get AI. Which they did.
As I discussed in my previous blog post, Why You Need an AI Business Plan, AI, just like any other technology you bring into your organization, needs a business plan to go with it. The “why” question is the critical question to address before you begin the project along with what and how. Why are you adopting AI? What benefits do you expect your company to experience? How will you know when those benefits have been achieved?
Without answering these questions, you’re just pouring money down the drain.
Here are six things you need to do BEFORE you agree to adopt a technology:
1. Define the key stakeholder. Who will benefit most from this project? Are you doing this to save money? Then finance (for example, the CFO) is a likely sponsor.
2. Define the outcomes. After this project is done, what is different? This is often the focus of PR/FAQs, but you can use any format you want. The point is to put down on paper what happens if you do this project.
3. Define the measures. If you know the benefits you expect, how will you measure them? You’ll need to take a baseline first, or you won’t know whether your project moved the needle. Are you measuring this today? If not, start right now.
4. Working backwards from the measures, design the project. Only once you know the result you are trying to achieve should you write a high-level architecture and solution concept document. “We are going to build X to achieve Y” is always how you want to start any design.
5. Iterate. Break the project down into small, discrete steps. Make sure you are achieving the claimed benefit early in the process. Don’t wait for a magical “big bang” at the end. You should be making progress at every sprint, and you should be measuring that progress.
6. Pivot. You were probably wrong in steps 1-5 above. Evaluate your mistakes, adjust, and replan. You won’t really know what you are doing until you do it, so take small steps and evaluate your performance as you go. You’ll be wrong. That’s OK. Adjust.
The really big thing here is to expect and embrace failure. In many organizations, the team is not rewarded for declaring failure. That’s a very unhealthy way to manage. For example, if you ask a team to reduce cost by 15% and they come back six weeks later and say, “Hey, that goal isn’t possible. We should cancel this project and do something else,” that’s a positive result. They just figured out that this thing won’t work. If you punish them for failure instead, they’ll keep working on it for months or years, only to fail after you’ve spent untold amounts trying to build something that cannot work.
Think about this another way. If you are a manager, how many people have you promoted for killing a major project? How many have you promoted for delivering the impossible? My guess: none and tons, respectively. That means you are rewarding hero culture. Instead of encouraging people to be clear-eyed and dispassionate, you encourage them to take risky bets because their personal career development goals are on the line. Is it surprising that failure is the most likely outcome in a culture like that? It shouldn’t be. Yes, the team should be willing to take risks. No, they shouldn’t force a bad hand because they’re afraid to tell you the truth.
Like I said, this isn’t new. I’ve seen similar bubbles a couple of times before in my career. During the dot-com boom of the late ’90s and early 2000s, companies rushed to get on the internet with zero idea what the internet was for. Later, cloud adoption mandates drove all kinds of low-value projects. That is what prompted me to write my book, Why We Fail, which discusses enterprise cloud adoption and exactly these issues.
Perhaps I should come out with a second edition about AI adoption. I could call it “Here We Go Again.”