Moats, Models, and the AI Stress Test

AI is fundamentally changing the landscape of software development.  There are risks to the business models and monetization structures behind the companies building the AI superstructures.  There are also fundamental changes to the way we build software, and that increased speed of delivery shifts the conversations engineering, product, and the business need to have (which I will put a pin in and probably take up in a different post).  In this post, I want to talk about the impacts to existing software offerings and companies, regardless of whether they start to use AI tooling or AI-first approaches.

Killing SaaS

There are a thousand think pieces out there about how AI will be the death of SaaS.  While I agree with some of their premises, I suspect both the causes and effects are overblown, in part for social media consumption.  I think we are also mixing up changing barriers to entry, shifting interaction patterns, business model revaluations, and software structure needs; pointing at all of them and calling it “SaaS”.  You’ve come to my blog, so you should know by now you are in for a history lesson.

We use the term “SaaS” interchangeably for both a software delivery method and a pricing model.  But they are very different things.  The software delivery method was a key innovation that enabled the pricing model, but you can have software delivered over the internet without pricing it as a per-seat “rental software” model.  So, let’s talk about them separately.

What is SaaS?

The software delivery method is simply a shift from installing software on a computer to accessing it through a web browser.  This was a seismic change in the power of applications, who was bearing the cost of processing, where the data was owned and stored, and the “ownership” of that software.  I used to buy Microsoft Word on a floppy disk and install it on my Intel 486 computer with a 32-bit CPU named “Bride of Frankenstein” (because my brother built it from several other computers).  I could, technically, still find that old disk and attempt to install it on my modern laptop, and I would still have the rights to do so.  That’s because when I saved up my allowance to buy my very first word processor, I was sold a perpetual license to that software, represented by that disk and the fancy embossed documents that came with it.  Microsoft built the software and then pressed a certain number of copies of it.

Starting in the 1960s, folks in computer science started to experiment with the concept of “time-sharing,” i.e. allowing multiple users to access a mainframe computer remotely.  Then the internet as we know it arrives: the World Wide Web proposal in 1989, the first browser in late 1990, the first webpage (Berners-Lee’s info.cern.ch, a guide to what the WWW was and how to use it), followed by CGI (Common Gateway Interface, a set of rules for how web servers interface with external programs) in 1993.  By 1995, we have JavaScript, so developers can add dynamic elements beyond text, including input validation and animations (which have been the bane of every developer since), and by 1998 XML, the data format that, combined with background requests from the browser, later became AJAX (the second bane of every developer since).  Also in the 90s we get the emergence of ASPs (application service providers, vendors hosting the software for you), which leads to the 1998 introduction of NetSuite, the first cloud-based business management application.  This means by the end of the 1990s we had the makings of “Web 2.0”: the interactive internet.

In 1999, Salesforce.com is founded, often credited with being the first “pure” SaaS.  Here’s the thing about Salesforce: it was (probably) the first major business application built as a web application from day one, as opposed to Microsoft Word, which started as a desktop application and later transitioned to a hybrid model (desktop + web based).  There was never an “on-prem” version of Salesforce.  It was always designed to be cloud-only, specifically to compete against on-premises competitors.

What in SaaS is at Risk

There are two things this fundamentally changed.  The first was where customers went to engage with their software.  Instead of installing a piece of software on their desktop, customers were accessing the software through the internet.  This increased the risks associated with poor internet access and the costs of personal computers (my 32-bit Bride of Frankenstein wouldn’t even be able to turn on today; a single tab of LinkedIn uses over a gig of RAM, per someone else’s very opinionated take on this).  Over the last two decades, there has been an interesting shift back to having a “desktop install” of web-based software.  I personally prefer my Word with a desktop version even though I know behind the scenes it’s a web app.  I don’t like having to hunt for it among my Chrome tabs.  The web application is here to stay.  AI isn’t going to kill the delivery method of SaaS.

The second is the licensing agreement for getting to use the software.  Instead of the traditional “I bought this copy” structure of my ancient copy of Word, Salesforce’s contracts were to “rent” access to the software based on usage.  This is the business model aspect, but it’s also a huge shift in data location and ownership.  It massively reduced the upfront cost of software because you didn’t need an on-site IT department building and managing servers for it.  It also reduced the ongoing cost of ownership because regular upgrades are automatically pushed out (for better or for worse) instead of sold as new versions.  This structure of licensing gave rise to “per seat” or usage-based fee structures.  And that is what those think pieces say is dying.  Well, that and the valuation of SaaS-based companies.

See, if I sell you a piece of software with an annual license based on usage, I lock you into those recurring costs.  This becomes the ARR (annual recurring revenue) metric I use to value my company.  If I sell you that floppy disk of Word, you may never come back and buy it again.  Also, my income is bumpy: I get a lot of income when I sell a version, but then it peters out over time until I release Microsoft Office 97 (the first one with Clippy!), 2000, XP, 2003, 2007, 2010, 2013, 2016, 2019, 2021.  If I sell you a subscription to Office 365 starting in 2011, it’s going to keep renewing until you cancel.  The income from software levels out.  The development cycles become continuous but easier to forecast.  The valuation of my stock goes up because my income vs. expenses, i.e. my return on investment, is more reliable.
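To make the contrast concrete, here’s a toy sketch (every number here is invented for illustration, not real Microsoft data) of how perpetual-license income is lumpy while subscription income declines smoothly and predictably:

```python
# Toy model (invented numbers): 10 years of revenue from the same
# 1,000-customer base under two pricing models.
years = range(1, 11)

# Perpetual license: $300 up front; only in years when a new version
# ships (say every 3 years) does roughly half the base buy the upgrade.
perpetual = []
for y in years:
    if y == 1:
        perpetual.append(1000 * 300)   # initial sales spike
    elif y % 3 == 1:
        perpetual.append(500 * 300)    # upgrade-cycle bump
    else:
        perpetual.append(50 * 300)     # trickle of new buyers

# Subscription: $100/year per seat, renewing with 95% retention.
subscription = []
seats = 1000
for y in years:
    subscription.append(seats * 100)
    seats = int(seats * 0.95)

print("perpetual   :", perpetual)
print("subscription:", subscription)
```

The perpetual line spikes on release years and craters in between; the subscription line slopes gently and predictably, which is exactly what makes ARR-based valuations attractive to investors.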

But why will AI kill SaaS? 

Because most SaaS is priced on a per-seat license.  You have one person sit down at a computer and use Word, you pay for one seat.  You run a company with 1,700 employees and buy Word for all of them, you pay annually for 1,700 seats.  Now you start replacing those employees with AI (it seems the headlines on this are a bit inflated) or, more likely, start using an AI agent to directly access information that humans were accessing through your web app.  The ability to charge per seat may dramatically change.  Smart SaaS companies will figure out how to iterate their pricing model to protect where they actually have differentiation (data cleanliness and availability, ease of use, integrations) instead of focusing on valuations based on ARR.

Changing Interaction Patterns

In addition to changing the traditional “per seat” pricing model, AI will also change where and how users interact with an application.  For years in software design we have focused on user journeys, interaction language, and how to deliver the right part of the application to the right user at the right time.  Web applications have become increasingly complex over the last two decades, serving multiple types of users trying to complete different jobs to be done, and providing different navigation and models to route those users.  Immediately, AI changes the complexity of that problem.  Instead of a menu of options, an AI agent can simply ask and then direct users.  Longer term, though, there is a high likelihood that users will stop wanting to log in to a specific application to do their task and instead want all their activities centralized within an agent-driven workflow.  Why get information for an order from one system and then log in to another system to enter that order?  It’s easy to go from that step to users fading out of the interaction on some of these activities altogether.

How, when, and where users interact with software will fundamentally change.  Right now, there are a lot of hypotheses that the AI companies will own that interaction pattern.  But while AI web browsers like Perplexity are winning on their ability to search and shift the interaction pattern, it’s not clear that users will adopt a one-size-fits-all model.  Do you want the information from your Facebook account commingled with your banking details and right next to your work assignment?  At the end of the day, interaction patterns really change because the human users make changes.  APIs fundamentally changed how data is moved between systems, and AI is accelerating some of that, but I’m more skeptical about humans being willing to use one application for all their online activities.  Google, though, is betting that is the future.

Data and Security

The beating heart of the modern internet is data.  Closely related is all the security needed to keep that data safe.  SaaS as a delivery method moved the ownership of data into the cloud and, with it and APIs, a million integrations to move data from one system to another.  AI consumes that data for everything it does.  The better the data, the better the actions the AI can take.  As Google well knows, control of the data is control of the internet.  To quote the AI summary on the Google results: “Google’s perspective on data is centered on the belief that data is a vital, transformative resource that, when organized and combined with AI, improves user experiences, enhances business agility, and drives innovation.”

The problem is that at most of the companies I have worked at, the data isn’t clean.  “Is this true?” is a question humans face every day.  We have the concept of the “database of record,” and even then the same question may have different answers based on the context of the person asking (customer, user, employee), the time it is asked, and the way it is asked.  The standardization of APIs led a lot of companies to change the architecture of their applications to make use of decentralized data sources.  AI is going to need a similar kind of transformation, built on top of the work that companies did to adopt microservices and APIs.  But I have been inside companies where the center of their system is still a mainframe.  They are miles away from AI agents running everything.  Even the most sophisticated companies I have worked for, some that were totally greenfield and some that had spent millions rebuilding their data architecture, aren’t ready for the structures needed to make sure the answer is always the same.  I see this as the real risk to companies undertaking AI initiatives: they will let the technology get ahead of their ability to deliver actual customer needs.

Reducing barriers to entry

At the same time, AI is hugely reducing the barriers to entry.  If we go back to B-school basics, Porter’s five forces model is useful today to understand business pressures: competitive rivals, potential for new entrants, supplier power, customer power, and threats of substitutes.  Companies using AI to increase efficiency become stronger competitive rivals if you aren’t doing the same.  But if you do use AI, now those AI companies have huge supplier power (something we touched on before).  Killing the SaaS subscription model gives customers more power.  But the biggest threat I see from AI is that it has hugely decreased the cost for new entrants and increased the available substitutes for your product.

You build an awesome product; let’s say it’s a word processor.  You have built it up over decades and have a huge following.  Now someone can vibe code a replacement in their basement over the weekend.  Okay, that might be an exaggeration, but the barriers to entry in most industries, and the cost to build a suitable replacement, are hugely decreased.  Now we need to think about our differentiation moats as something besides the features of the software.  Which is something we should have always been doing but have gotten lazy about.  I worked for a company that did websites for hospitals.  Our moat was not having a better CMS (content management system).  It was that most companies didn’t want to go through the effort of HIPAA compliance, and we staffed with people who were experts in a specific industry.  Microsoft’s moat around Word is not that it is the best word processor on the market.  It’s not.  For my creative writing I use Scrivener, which is far superior for complex, novel-length writing for publication.  Google Docs is far better for business collaboration.  OnlyOffice and OpenOffice are better for privacy, security, and document formatting control.  Lark is better for integration with chat and project management.  Many of the alternatives provide basically free migration through compatibility with docx file formats, removing another barrier.  No, Microsoft’s moat is prevalence through packaging and brand trust.  Pretty much every business uses the Microsoft Office Suite, because it’s the default and it comes with all the tools.  Microsoft shouldn’t rest, and isn’t resting, on that history.  The world is full of examples of once-darlings failing to adapt.

All of these factors will create huge challenges for companies now and in the near future.  But AI will not be the cause of death of any company in the near future; it will be the failure of a company to understand its fundamentals, adapt to a changing environment, and keep its customers central to its goals.  Disruption is the default in business.  Adaptation is a leader’s job.  AI is your friend.  It’s true, just ask it.  I promise you an interesting philosophical answer.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted Claude (Anthropic), Gemini (Google), ChatGPT (OpenAI), and CoPilot (Microsoft/GitHub) by giving each my blog post as well as the image I used for the previous post.  I’m gonna give Claude cred for coming up with multiple good images (seen below) that didn’t really fit the feel I was going for, though I really wanted to use them.  Gemini leaned hard into infographics (which seems to be its default mode).  This time CoPilot won.

I also asked them all to give me a title and LinkedIn summary.  Claude, again, won this challenge by asking good questions and effectively finding the thesis.

Claude’s first and second attempts are below. Sadly, it took me a surprising amount of work to figure out how to get them from SVG format (which I am sure Claude was using to make edits easy) to web-ready formats… technology’s future is looking bleak…

AI Technology is Interesting; the market conditions can’t persist

Large market disruptions and technological advances have gone hand-in-hand for as long as I can remember.  We have seen massive changes in market fundamentals after the emergence of the Internet.  In 2008, the implosion of the financial markets, particularly in the housing industry, occurred simultaneously with the emergence of APIs that dramatically changed the way systems could talk to each other.  2020 brought COVID shutdowns, a fundamental change to the way we work, 0% federal interest rates, and an explosion in the number and scale of startups.

Smart businesses find ways to leverage disruptions into new markets or advances while insulating themselves from the risks associated with those disruptions.  There is risk both in engaging with new technology (see also all the failed blockchain startups) and in waiting too long to adopt.  I am talking to a couple of business leaders trying to figure out how to make use of the emergence of AI, with all the confusion that goes along with not being deep in the tech weeds (sometime in the last six months we started to refer to ML algorithms as AI, which is fascinating).

The difference with AI now is both the speed of its emergence and how much it has had a direct-to-consumer launch without guardrails.  So, let’s talk in plain English, or at least plain Jennie which is close enough, about what’s going on in this AI revolution and how to insulate yourself from risk while taking advantage of emerging opportunities.

Millennial Lifestyle Subsidy

Twice in my working history, the Federal Reserve set interest rates to near zero: December 2008–December 2015 during the Great Recession (remember “too big to fail” and poor Lehman Brothers) and March 2020–March 2022 during the COVID pandemic.  I’m not here to debate the Federal Reserve or the macroeconomic impact of these moves.  Instead, let’s talk about how this impacted business decision making, and is still impacting us with AI adoption.

Suddenly, for a lot of companies, money was essentially free. You are a VC sitting on access to capital and the cost of investing has plummeted; what do you do?  You start investing in a wider portfolio of companies because your risk of choosing wrong, and losing your investment, has gone down.  You are a person with an interesting idea to take to market, and now you suddenly have a huge influx of funding opportunities.  Not only did this increase the number of startups in the market, it also fundamentally changed the metrics on which those startups were judged.  Instead of looking for profitability as a major indicator of success, the metrics shifted to adoption.  The goal was attaining customers, getting them hooked on the freemium model, and showing explosive growth.  Instead of ARR (annual recurring revenue) we started tracking MAU (monthly active users).  It’s not that the goal posts moved; the entire field shifted.

An interesting side effect of this was that the VC money drove down the price of these services, often below the cost to serve.  Good examples are transportation gigs like Uber, meal prep kits like Hello Fresh, restaurant food delivery like DoorDash, and coworking like WeWork.  There was an explosion of these services that were tech driven, VC backed, and low cost.  Someone far more clever than I labeled this phenomenon the “Millennial Lifestyle Subsidy” (Derek Thompson, staff writer at The Atlantic, was that person: original article in 2019, follow-up article in 2022).  Then, once capital got more expensive, VCs started to look for actual returns, the market matured, and inflation and labor costs increased; the subsidy dried up, and now these services are starting to cost market rate.  Which results, naturally, in an exit of customers followed by an exit of providers.

AI’s Eating All the Capital

The same thing is happening in AI right now.  The cost of data centers, high-performance chips, and the very expensive engineers building AI is not reflected in the cost of commercially available AI.  This time it’s not because of the low cost of money (as of this writing the Fed just maintained the fed funds rate at 3.5–3.75%, with the effective funds rate holding steady at 3.64%… a far, far cry from the 0%–0.25% of quantitative easing in March 2020).  It’s because of massive circular financing arrangements between major AI labs, hardware providers, and cloud hyperscalers.  Let me go a little deeper on that.  The big companies building the data centers, the companies building the chips that go into those data centers, and the AI tech companies that will use that compute are basically trading money between themselves to fund these deals (great visual of this).  Which means there is no actual money being made here.  Also, the majority of this technology has a three-year lifespan, likely less because of Moore’s law, but it’s being depreciated over five to six years.  Many folks have called out this puzzle; in particular, The Economist has done a great job of laying it all out.
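Why does the depreciation schedule matter so much?  Straight-line depreciation spreads an asset’s cost evenly over its assumed useful life, so stretching a three-year asset over six years halves the annual expense hitting the income statement.  A hedged back-of-the-envelope sketch (the $10B figure is invented for illustration):

```python
# Illustrative only: $10B of GPU hardware, straight-line depreciation.
cost = 10_000_000_000

annual_expense_3yr = cost / 3   # if the hardware is really obsolete in 3 years
annual_expense_6yr = cost / 6   # the longer schedule many are actually using

print(f"3-year schedule: ${annual_expense_3yr / 1e9:.2f}B/year")
print(f"6-year schedule: ${annual_expense_6yr / 1e9:.2f}B/year")
print(f"annual expense understated by "
      f"${(annual_expense_3yr - annual_expense_6yr) / 1e9:.2f}B")
```

If the real useful life is three years, the longer schedule makes current profitability look roughly twice as good as it is, with the shortfall arriving later as write-downs.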

The commercially available tools will get more expensive.  They must.  It may take years before we fully see this play out. It may also come sooner, with both OpenAI (ChatGPT) and Anthropic (Claude) planning on going public this year. But we are already starting to see cracks in OpenAI’s dominance as competition heats up and the return on investment stutters.  Disney pulled out of a licensing deal while the ink was still wet, Walmart shelved their agentic checkout integration after returns didn’t live up to the hype, Microsoft restructured their deal to move into a more competitive framing, Nvidia’s $100B investment has stalled, and there is the start of user boycotts after a deal with the US military. All of this breaking news has happened to OpenAI in the last month. But this isn’t just one company starting to fray; I believe these are early signs of the high investment and low returns of these models starting to wear thin.  There will not be one winner-take-all in this market; it will be a fragmented offering, which will result in lower market caps and multipliers for all the companies involved.

Real World, Real Applications, Normal People

Okay, Jennie, that’s all interesting and a bit scary about the long-term state of our economy.  And we haven’t even talked about the geopolitical risks of China’s heavy investments and the vulnerability of the Taiwan-based chip industry.  But what do I, the average person, do with all this?

Great question!  Here’s the good news.  Much like the millennial lifestyle subsidy, this all means that the AI technology we are seeing right now is artificially inexpensive.  While it is cheap, you should experiment using the commercially available tools.  There are tons of articles out there about when you should use Gemini vs ChatGPT vs Claude vs CoPilot.  There are also tons of resources for when you should tap industry-specific models instead of generic ones.  It is worth spending some time seeing what these tools can do and what use cases they can easily automate.  But… use caution.

If you have integrated an AI tool into a core business process and then forgotten how to do it without the AI, you will be in trouble when the price starts skyrocketing.  As with any vendor relationship, you should do your due diligence to understand what kinds of agreements and protections you have around the technology.  If you are building with AI embedded in your software, what kinds of integration layers do you have in place to easily switch vendors, or to fall back to a more traditional automation process if you need to?  At what point, once you understand how the tool works and have clearly built the business case for the efficiencies gained, does it make sense to look at owning the means of production instead of relying on a vendor?  That is, when do you roll your own LLM or voice agent?  All of these set points and risks will be different for different businesses.

I think it is also super important to think about what AI will convince you it’s good at when it’s not.  AI doesn’t make decisions.  And AI doesn’t create, it plagiarizes (notably, the novel Shy Girl was recently pulled because it was written by AI, which was a contractual violation).  Also, I have customers (in B2B environments) starting to push back on vendors because they don’t want their data to be used by AI.

The technology is interesting; the market conditions can’t persist.


I had started this article planning on talking about a bunch of trends I am seeing in the market with AI.  But I so love the millennial lifestyle subsidy aspects of the market conditions right now I got overly verbose.  In simpler words, Jennie geeked out too much.  So, we are gonna call this part one and come back soon with part two.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted Claude (Anthropic), Gemini (Google), ChatGPT (OpenAI), and CoPilot (Microsoft/GitHub) by giving each my blog post.  CoPilot hit its limits for creating images because my husband used all our free tokens making complex spreadsheets for fun. I had to give the other three multiple chances to create a good header image.  The initial passes were really bad.  Gemini won.  I also asked them all to give me a title and LinkedIn summary.  ChatGPT made me realize my thesis should be my title (my working title was Business Models Meet AI: Millennial Lifestyle Subsidy).  Claude did the best job of asking the right questions and generating a starting LinkedIn summary I iterated on.  Still too many em-dashes (:fist-shaking:).

AI in the Wild: Observations of how “Normal” People are Adopting AI

The other day my book club (the book was Why Fish Don’t Exist by Lulu Miller, highly recommend) descended into a discussion of AI.  It was an interesting group.  I was the only person there who works in tech.  Most of the rest of the group were teachers, white collar professionals, and consultants.  All moderately tech-savvy, with a clear understanding of navigating the internet, but no one was going to explain how a database or algorithm worked.  So, basically, a pretty representative group of the average user for most of the applications I have worked on.

So what were some of the use cases that this sample was relying on AI for?

Search Replacement

The most common use case, by far, was using AI to search for information.  From “what should I make for dinner” to “historical facts about ARPANET,” it has become a combination of Google and Wikipedia rabbit holes.  Many people in the group described getting lost in a conversation with AI and going down a lot of wild routes to unusual connections.  This feels like the internet I grew up on, on steroids.  And it’s impossible anymore to search without AI coming back with suggestions.  Google, Microsoft, Amazon, and most large companies have replaced their natural search patterns with something powered by an LLM.  Because summarizing information is something AI is really good at.

Life and Social Coach

One member of the group mentioned using AI to come up with comebacks to a bad interaction with a neighbor.  Until her daughter criticized her for the energy use.  I let her know that, according to Science Vs., that kind of usage has low environmental impact while videos have a significant cost.  She didn’t realize you can make videos with AI.  This resulted in a lot of discussion of using AI as a life or social coach.  Honestly, that is how I am most using AI right now: job coach.  I like having something I can talk through specific situations with where I don’t feel like I am burning out a friend.  It’s not about output, it’s about the things I would have previously just not worked through.  Like the right way to approach the CEO of a company I want to work at.  And how to think about rejections within 24 hours of an application.  When is the right time to pivot?  It’s like having someone to ask for advice who has literal encyclopedic knowledge, with no social consequences for not taking the advice.  I think that’s a pretty cool addition, especially for those of us who are neurodiverse.  That mother who asked for comebacks to her neighbor?  She didn’t use any of them.  It let her vent without any social cost.  That seems like a good outcome.  Is that how we should be using this tech?  It doesn’t fundamentally change anything but, honestly, maybe it would make for a better world if we thought a bit more before reacting to other people.  AI as a meditation supplement?  Introducing a pause before reaction just to see what the LLM says?

I also heard examples of using AI to get directions on the job or to help write emails to coworkers.  The latter was not effective because your coworkers can tell it’s not you.  The former is straight up dangerous.  I think it’s really important to understand that AI can’t make decisions.  It can be very convincing, but at the end of the day it’s going to tell you the most common answer, not the right answer.  One person in the group noted that when it’s a topic she is knowledgeable on, she has observed AI to be right only about 40% of the time.  This is one of the reasons I will use the grammar checker in Word to correct my structural problems but I don’t let AI write my blog posts.

Research Assistant

The teachers in the group complained about students feeding homework assignments, especially essays, into AI and then having to use AI detectors to determine which kids were doing this.  Meanwhile, the parents described how their kids would use AI to start a project and then laboriously rewrite what the AI wrote to avoid the detectors.  This whole pattern seems to miss the point of what the AI is good at and what is clearly cheating.  Generating your own ideas and then using AI to help structure those ideas coherently, e.g. having it help create an outline or review and offer revisions, seems like a good use.  Plugging in the prompt and then turning in the paper just seems like a missed opportunity to actually learn the content of the class.  No real difference from just googling the answer and scraping an essay from the internet.  Only it’s way easier to do that now.

What was really interesting in the debate about AI in schools was that, at least with this group, it hasn’t yet matured to the point of teaching children how to make smart use of the technology.  There is a lot of reactivity to the tools kids have access to now.  Both of my kids are in Girls Who Code and have played with Scratch.  We have used physical games to understand the logic patterns of coding.  We have a family rule that they have time limits on device time spent consuming other people’s content (ugh, the Minecraft-watching videos), but they can use their devices for creating.  Both have played with Suno to create their own music (one plays guitar, the other sings in the choir).  This is a mature reaction to emergent technologies (if I do say so myself).  We aren’t there yet in having clear guardrails in our schools or in society.

Another member of the group described her personal use of AI as a research assistant.  She dumps her historical writing and her research into the AI.  She then prompts it to compile what she is looking for in that particular project.  She is training it on her writing style.  She is also checking sources again before publishing anything.  She is having the AI generate a first draft she can react to and adjust.  This feels like the cleanest real example of using AI as a research assistant.  I imagine in the past someone would have used a college grad student for exactly this kind of work (pin that destruction-of-the-employment-pipeline thought for later).  It’s a fascinating use case.  This is a legit professional using AI to make herself more efficient, not to replace her own thinking or expertise.

Intern

Related to the research assistant, the most common usage I have seen inside companies, and engineering teams, is treating AI as a really capable intern with no business expertise.  It has no context for the environment and it can’t make decisions, but it sure can generate a lot of code.  I have talked to engineers who see real value from AI in throwaway scripts and, in certain very greenfield situations, in writing production code with guardrails.  But you can’t unleash it on a legacy system without serious consequences.  So this is back to that pin I put in the research assistant discussion.  I fear that AI is destroying the talent pipeline.  There is a lot you learn in your early years, and if we replace zero-years-of-experience roles, both inside and outside of tech, we miss out on those opportunities to learn and grow our talent naturally.  We will see how this works out in the next few years.

Jennie’s brief diversion into a history lesson

At one point in the conversation I finally outed myself as working in tech and noted how strange watching AI use in the “civilian world” has been from where I sit.  AI is one of the first technical tools I have seen in my career that has been broadly launched to the public before it has had the natural pressure testing and guardrails introduced by a more technical audience.  It was almost a decade between the birth of the internet (as ARPANET) and when AOL became commonplace (in the mid to late 90s).  APIs revolutionized the way computer systems could communicate with each other in the early to mid-aughts, but they still aren’t something a civilian can really make use of.  Blockchain was born in 2009, but it’s really only in the last few years that bitcoin started to become mainstream, with platforms like Coinbase in 2012 and the ability to buy it on traditional brokerages (most notably Fidelity and Schwab) not arriving until 2024.

AI has conceptually existed since the 1950s, and Deep Blue won that chess competition against Kasparov in 1997.  Machine learning algorithms have been used by tech teams aggressively for the last two decades.  But generative AI has exploded in the last year, both inside and outside the software industry.

We are in a fascinating time with this emerging technology and I look forward to more opportunities to observe users and inventive uses.  I look forward to looking back on this blog post with head shaking amusement.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image and as part of searching for historical information (because it’s unavoidable). In this case I prompted Claude, Gemini, and ChatGPT by giving each my blog post; ChatGPT won.  I also asked them all to give me a title and LinkedIn summary.  ChatGPT and Gemini both came up with similar ideas which I iterated from.

ICP Explainer pt 2: Build Their Dreams

One of my favorite product tools, especially when talking to non-product folks, is the Ideal Customer Profile.  That is because it finds that perfect overlap between the deep history of targeted marketing and easy-to-understand metaphors.  Too often when I am talking to business leaders, senior engineers, or customer support staff, the theory is too much.  They want a quick mental model they can align their thinking with.  ICP is exactly that.  Also, I get to make fun jokes about Insane Clown Posse.

Fundamentally, an ICP answers the questions “who are we building this for” and “what problem are we solving for them”.  That makes it super powerful when we are having tradeoff or prioritization conversations.  With many of the startup founders I have worked with, we have used an ICP to focus limited energy in both product and sales.  With larger companies, I have used it to clearly understand who we are saying ‘no’ to and why, as well as to force conversations about purpose drift.  Super useful.

But also, super dry.

I can give you examples of what an ICP is and then a step-by-step process on how to build them.  I can give you a bunch of anti-patterns and failures with the related cautions.  I can give you all the “it depends” across industries and business models.  And you will get all that down there if you want to scroll.  But that’s boring, and it doesn’t help with the goal I actually have with most product tools: mental models that help us communicate.

We are writing a novel.  A story of a person with a great need.  One day our main character wakes up suddenly in the middle of the night, sheets damp with sweat and a great anxiety gripping their chest.  They were dreaming of something that would solve their biggest problem, and it was right there, within their grasp.  But as the dream slides away, they are left with a pressing certainty that they will never find the solution.

What is the problem they are having?  What relief will they feel when they solve it?  What are the characteristics of them, as a character, that make that problem so prominent it is waking them in the middle of the night?  Who are they?  That person is your ideal customer.  They are deeply motivated.  They have a real, tangible problem.  And solving that problem will create deep, meaningful relief.  The ICP isn’t a mishmash of your existing customers’ characteristics.  It’s not a set of demographics that describes a huge swath of the population (which is somehow always a woman 35-45 with 2 children named Jennifer).  And it can be expressed crisply, with clear things it isn’t.

B2B and B2C examples

So, let’s walk through a couple examples and then we can talk about how we build it.

  • Online consumers looking for rare or hard-to-find books (Amazon of the early aughts)
  • Families looking for interactive and collaborative games (Wii)
  • Sales leaders with complex and consultative sales processes looking to organize their sales team’s time and focus (salesforce.com in the early aughts)
  • Hospital and Health System marketers looking for HIPAA-compliant and expert website development and online marketing (CMS company I worked with)
  • Developer-tool companies looking to connect meaningfully with the developer community (social media startup I worked with)

With one of these clear descriptions of who we are serving and what problems we are solving for them, I can test out what we should and shouldn’t be doing.  Let’s focus on the complex and consultative sales process tool.  I’m building a product that serves the sales leaders.  I have gotten a request to create marketing attribution fields (where the lead came from).  Do I build it?  Does it help the sales team sell?  Sometimes knowing where the lead came from can help open a conversation.  So I don’t need that information for marketing attribution, I need it to help the sales team start a conversation.  Does it help the sales leadership team?  Yes, knowing which channels are resulting in better sales helps me structure my team’s focus.  So the reason I am building this feature has changed.  I have reframed why I am building a feature, from marketing attribution to the needs of my primary users.

Anti-Patterns

To better understand how to use this tool, it’s important to start with what it is not.  It is not a buyer persona.  This is not the tool you use to fill your sales pipeline or prospect.  Buyer personas are super useful and often contain some of the same demographic and psychographic elements.  However, buyer personas are about who you are targeting with marketing and sales messages.  They will be a lot broader than an ICP because while you can target firmographics (the characteristics of a company) and demographics, it’s a lot harder to target pain points.  Buyer personas are about authority, timeline, budget, and need.  They are a point-in-time snapshot of the sales process.  ICPs are an idealized set of characteristics about the perfect customer, one you will almost certainly never meet.  They are needs-forward, not personal-characteristics-forward.

It’s not who we want to be when we grow up.  It’s who we want to be right now.  If you are currently looking at horizon 2 or 3 explorations, you want to develop an ICP specifically for those experiments and test against it to see how you are getting penetration.  Often startups will start with an ICP and test to see if they get traction within that persona.  If they don’t, they tweak the ICP.  Companies with an existing customer base should have an ICP for their core business and then experimental ICPs for potential innovations that are adjacent to the core business.  If I sell into hospitals and I want to experiment with selling into other highly regulated businesses, that is a new ICP, and I need to better understand the demographics and pains of that new customer set.  Large companies may have multiple ICPs for their core business, like the Fortune 500 company I worked with.  The mistake is muddying them together into one ICP.  That quickly becomes an “everything for everyone” model.  Your ICP must have someone you are saying “no” to.  Subdivide to where you have product/market fit (PMF), even when, often, it is the same product across multiple PMF models.

It is not a current customer profile like account management will use.  Nor is it the user journeys that UXR and product will develop.  This isn’t how the current users interact with the current offerings.  It’s about what the ideal customer needs because of the deep pain we are solving for them.

Similarly, in B2B, you must be conscious that your buyer isn’t always your user.  Much like the sales leadership vs. sales team example above, the ICP incorporates all those people together.  The problem you are solving is the intersection of the needs of the buyer and the needs of the user.  You can’t focus on one or the other alone.

Building an ICP

We are going to go about this in the following pattern: Pain Points -> Attributes -> Demographic Profile.

To build an ICP, start by answering the question “What problem are we trying to solve?”  It may be easier to reverse this question and ask “Thinking about my most ideal customer, what problem do they have that they currently find unsolvable?”  Write it all down.  Then go through each problem statement and ask “do I want to solve this problem?” until you have a set of problem statements (customer pains) that you want to solve.

Now, having a problem statement, you can start to think about the characteristics of a customer with that problem.  If you have existing customers, this can mean looking at commonalities among those customers that are related to the problem statement.  Here is my handy cheat sheet of characteristics to segment based on:

Now you have a profile.  Test it.  Look at each attribute and ask “is this really compelling?” and “does this change customer motivation?”  Otherwise, much like all marketing profiles, it becomes, well, me.  It’s easy to just reduce your audience to the most common demographic properties of your population.  The name Jennifer was really popular in the 80s, women 35-45 have a lot of buying power, and that demographic is really likely to be married with 2 kids.  None of those characteristics is likely to be compelling for any particular problem statement.  I don’t need software to organize my kids’ schedules because I am in my 40s and married.  Having kids is compelling.  But being a busy professional and upper income are likely more impactful on that need.

Application

Once I have an ICP that I have walked around the organization, it becomes a foundation for conversations we have with stakeholders and inside the team.  Does this feature help our ICP?  What are we saying no to?  Do our recent customers look like our ICP?  If there is drift, what is it, and does it mean we need to adjust the ICP, or are we starting to muddy our value proposition?  How recently have we talked to a customer who looks like our ICP?  What needs did they have that we aren’t currently solving?

Note: I don’t use AI to help write my posts or create example pictures. The segmentation cheat sheet is my creation and is also present in my Build Things People Want talk.  I did use AI to create the header image, in this case ChatGPT with the prompt “create an image of glenngarry glen ross in the style of insane clown posse”.