Moats, Models, and the AI Stress Test

AI is fundamentally changing the landscape of software development.  There are risks to the business models and monetization structures behind the companies building the AI infrastructure.  There are also fundamental changes to the way we build software, and that increased speed of delivery is shifting the conversations engineering, product, and the business need to have (which I will put a pin in and probably take up in a different post).  In this post, I want to talk about the impacts to existing software offerings and companies, regardless of whether they start to use AI tooling or AI-first approaches.

Killing SaaS

There are a thousand think pieces out there about how AI will be the death of SaaS.  While I agree with some of their premises, I suspect both the causes and effects are overblown, in part for social media consumption.  I think we are also mixing up changing barriers to entry, shifting interaction patterns, the reevaluation of business model valuations, and changing software structure needs; pointing at all of them and calling it “SaaS”.  You’ve come to my blog, so you should know by now you are in for a history lesson.

We use the term “SaaS” synonymously for both a software delivery method and a pricing model.  But they are very different things.  The software delivery method was a key innovation that enabled the pricing model, but you can have software delivered over the internet without pricing it as a per seat “rental software” model.  So, let’s talk about them separately.

What is SaaS?

The software delivery method is simply a shift from installing software on a computer to accessing it through a web browser.  This was a seismic change in the power of applications, who was bearing the cost of processing, where the data was owned and stored, and the “ownership” of that software.  I used to buy Microsoft Word on a floppy disk and install it on my Intel 486 computer with a 32-bit CPU named “Bride of Frankenstein” (because my brother built it from several other computers).  I could, technically, still find that old disk, attempt to install it on my modern laptop, and still have the rights to do it.  That’s because when I saved up my allowance to buy my very first word processor, I was sold a perpetual license to that software, represented by that disk and the fancy embossed documents that came with it.  Microsoft built the software and then pressed a certain number of copies of it.

Starting in the 1960s, folks in computer science started to experiment with the concept of “time-sharing,” i.e. allowing multiple users to access a mainframe computer remotely.  Then the internet as we know it arises: the World Wide Web is proposed in 1989, the first browser arrives in late 1990, and the first webpage (Berners-Lee’s info.cern.ch, a guide to what the WWW was and how to use it) goes live in 1991, followed by CGI (Common Gateway Interface, a set of rules for how web servers interface with external programs) in 1993.  By 1995, we have JavaScript, so developers can add dynamic elements beyond text, including input validation and animations (which have been the bane of every developer since), and by 1998, XML, which gave us a standard way to structure data requested from a server in the background (the pattern that later became AJAX, the second bane of every developer since).  Also in the 90s we get the emergence of ASPs (application service providers, vendors hosting the software for you), which leads to the 1998 introduction of NetSuite, the first cloud-based business management application.  This means by the end of the 1990s we had the makings of “Web 2.0”: the interactive internet.

In 1999, Salesforce.com is founded, often credited with being the first “pure” SaaS company.  Here’s the thing about Salesforce.  It was (probably) the first major business application built first as a web application, as opposed to Microsoft Word, which started as a desktop application and later transitioned to a hybrid model (desktop + web based).  There was never an “on-prem” version of Salesforce.  It was always designed to be cloud-based only, specifically to compete against on-premises competitors.

What in SaaS is at Risk

There are two things this fundamentally changed.  The first was where customers went to engage with their software.  Instead of installing a piece of software on their desktop, customers were accessing the software through the internet.  This increased the risks associated with poor internet access and the costs of personal computers (my 32-bit Bride of Frankenstein wouldn’t even be able to turn on today; a single tab of LinkedIn can use over a gig of RAM, per someone else’s very opinionated take on this).  Over the last two decades, there has been an interesting shift back to having a “desktop install” of web-based software.  I personally prefer my Word with a desktop version even though I know behind the scenes it’s a web app.  I don’t like having to hunt for it among my Chrome tabs.  The web application is here to stay.  AI isn’t going to kill the delivery method of SaaS.

The second is the licensing agreement for getting to use the software.  Instead of the traditional “I bought this copy” structure of my ancient copy of Word, Salesforce’s contracts were to “rent” access to the software based on usage.  This is the business model aspect, but it’s also a huge shift in data location and ownership.  It massively reduced the upfront cost of software because you didn’t need an on-site IT department building and managing servers for your software.  It also reduced the total cost of ownership because the license now came with regular upgrades automatically pushed out (for better or for worse).  This structure of licensing gave rise to “per seat” or usage-based fee structures.  And that is what those think pieces say is dying.  Well, that and the valuation of SaaS-based companies.

See, if I sell you a piece of software with an annual license based on usage, I lock you into those recurring costs.  This becomes the ARR (annual recurring revenue) metric I use to value my company.  If I sell you that floppy disk of Word, you may never come back and buy it again.  Also, my income is bumpy: I get a lot of income when I sell a version, but then it peters out over time until I release Microsoft Office 97 (the first one with Clippy!), 2000, XP, 2003, 2007, 2010, 2013, 2016, 2019, 2021.  If instead I sell you a subscription to Microsoft 365 (launched as Office 365 in 2011), it’s going to keep renewing until you cancel.  The income from software levels out.  The development cycles become continuous but easier to forecast.  The valuation of my stock goes up because my income versus expenses, i.e. my return on investment, is more reliable.
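The smoothing effect is easy to see with toy numbers (all figures here are hypothetical, not real Microsoft pricing):

```python
# Toy comparison (hypothetical figures) of perpetual-license vs
# subscription revenue from one customer over six years.

# Perpetual model: one big purchase, then nothing until the next version.
perpetual = [300, 0, 0, 300, 0, 0]  # buys a new release every three years

# Subscription model: a smaller annual fee that renews until cancelled.
subscription = [120] * 6

def bumpiness(stream):
    # Spread between the best and worst year: a rough proxy for how
    # hard the income is to forecast.
    return max(stream) - min(stream)

print(sum(perpetual), bumpiness(perpetual))        # total 600, spread 300
print(sum(subscription), bumpiness(subscription))  # total 720, spread 0
```

Similar totals, wildly different predictability, and predictability is what the market pays a premium for.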

But why will AI kill SaaS? 

Because most SaaS is priced on a per seat license.  You have one person sit down at a computer and use Word, you pay for one seat.  You run a company with 1,700 employees and buy Word for all of them, you pay annually for 1,700 seats.  Now you start replacing those employees with AI (though the headlines on this seem a bit inflated) or, more likely, start using an AI agent to directly access information that humans were accessing through your web app.  The ability to charge per seat may dramatically change.  Smart SaaS companies will figure out how to iterate their pricing model to protect where they actually have differentiation (data cleanliness and availability, ease of use, integrations) instead of focusing on valuations based on ARR.

Changing Interaction Patterns

In addition to changing the traditional “per seat” pricing model, AI will also change where and how users interact with an application.  For years in software design we have focused on user journeys, interaction language, and how to deliver the right part of the application to the right user at the right time.  Web applications have become increasingly complex over the last two decades, serving multiple types of users trying to complete different jobs to be done, and providing different navigation and models to route those users.  Immediately, AI changes the complexity of that problem.  Instead of a menu of options, an AI agent can simply ask and then direct users.  Longer term, though, there is a high likelihood that users will stop wanting to log in to a specific application to do their task and instead want all their activities centralized within an agent-driven workflow.  Why get information for an order from one system and then log in to another system to enter that order?  It’s easy to go from that step to users fading out of the interaction on some of these activities altogether.

How, when, and where users interact with software will fundamentally change.  Right now, there are a lot of hypotheses that the AI companies will own that interaction pattern.  But while AI web browsers like Perplexity are winning on their ability to search and shift the interaction pattern, it’s not clear that users will adopt a one-size-fits-all model.  Do you want the information from your Facebook account commingled with your banking details and right next to your work assignment?  At the end of the day, interaction patterns really change because the human users change.  APIs fundamentally changed how data is moved between systems, and AI is accelerating some of that, but I’m more skeptical about humans being willing to use one application for all their online activities.  Google, though, is betting that is the future.

Data and Security

The beating heart of the modern internet is data.  Closely related is all the security needed to keep that data safe.  SaaS as a delivery method moved the ownership of data into the cloud and, with it and APIs, a million integrations to move data from one system to another.  AI consumes that data for everything it does.  The better the data, the better the actions the AI can take.  As Google well knows, control of the data is control of the internet.  To quote the AI summary on the Google results: “Google’s perspective on data is centered on the belief that data is a vital, transformative resource that, when organized and combined with AI, improves user experiences, enhances business agility, and drives innovation.”

The problem is that at most of the companies I have worked at, the data isn’t clean.  “Is this true?” is a question humans face every day.  We have the concept of the “database of record,” and even then the same question may have different answers based on the context of the person asking (customer, user, employee), the time it is asked, and the way it is asked.  The standardization of APIs led a lot of companies to change the architecture of their applications to make use of decentralized data sources.  AI is going to need a similar kind of transformation, built on top of the work that companies did to adopt microservices and APIs.  But I have been inside companies where the center of their system is still a mainframe.  They are miles away from AI agents running everything.  Even the most sophisticated companies I have worked for, some that were totally greenfield and some that had spent millions rebuilding their data architecture, aren’t ready for the structures needed to make sure the answer is always the same.  I see this as the real risk to companies undertaking AI initiatives: they will let the technology get ahead of their ability to deliver actual customer needs.

Reducing barriers to entry

At the same time, AI is hugely reducing barriers to entry.  If we go back to B-school basics, Porter’s five forces model is still useful for understanding business pressures: competitive rivals, potential for new entrants, supplier power, customer power, and threats of substitutes.  Companies using AI to increase efficiency become stronger competitive rivals if you aren’t doing the same.  But if you do use AI, those AI companies now have huge supplier power (something we touched on before).  Killing the SaaS subscription model gives customers more power.  But the biggest threat I see from AI is that it has hugely decreased the cost for new entrants and increased the available substitutes for your product.

You build an awesome product; let’s say it’s a word processor.  You have built it up over decades and have a huge following.  Now someone can vibe code a replacement in their basement over a weekend.  Okay, that might be an exaggeration, but the barriers to entry in most industries, and the cost to build a suitable replacement, are hugely decreased.  Now we need to think about our differentiation moats as something other than the features of the software.  Which is something we should have always been doing but have gotten lazy about.  I worked for a company that did websites for hospitals.  Our moat was not having a better CMS (content management system).  It was that most companies didn’t want to go through the effort of HIPAA compliance, and we staffed with people who were experts in a specific industry.

Microsoft’s moat around Word is not that it is the best word processor on the market.  It’s not.  For my creative writing I use Scrivener, which is far superior for complex, novel-length writing for publication.  Google Docs is far better for business collaboration.  OnlyOffice and OpenOffice are better for privacy, security, and document formatting control.  Lark is better for integration with chat and project management.  Many of the alternatives provide basically free migration through compatibility with docx file formats, removing another barrier.  No, Microsoft’s moat is prevalence through packaging and brand trust.  Pretty much every business uses the Microsoft Office Suite, because it’s the default and it comes with all the tools.  Microsoft shouldn’t rest, and isn’t resting, on that history.  The world is full of examples of one-time darlings failing to adapt.

All of these factors will create huge challenges for companies now and in the near future.  But AI will not be the cause of death of any company; the cause will be a company’s failure to understand its fundamentals, adapt to a changing environment, and keep its customers central to its goals.  Disruption is the default in business.  Adaptation is a leader’s job.  AI is your friend.  It’s true, just ask it.  I promise you an interesting philosophical answer.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted Claude (Anthropic), Gemini (Google), ChatGPT (OpenAI), and CoPilot (Microsoft/GitHub) by giving each my blog post as well as the image from the previous post.  I’m gonna give Claude cred for coming up with multiple good images (seen below) that didn’t really fit the feel I was going for, though I really wanted to use them.  Gemini leaned hard into infographics (which seems to be its default mode).  This time CoPilot won.

I also asked them all to give me a title and LinkedIn summary.  Claude, again, won this challenge by asking good questions and effectively finding the thesis.

Claude’s first and second attempts are below. Sadly, it took me a surprising amount of work to figure out how to get them from SVG format (which I am sure Claude was using to make edits easy) to web-ready formats… technology’s future is looking bleak…

AI Technology is Interesting; the market conditions can’t persist

Large market disruptions and technological advances have gone hand-in-hand for as long as I can remember.  We have seen massive changes in market fundamentals after the emergence of the internet.  In 2008, the implosion of the financial markets and OEMs, particularly in the housing industry, occurred simultaneously with the emergence of APIs that dramatically changed the way systems could talk to each other.  2020 brought COVID shutdowns, a fundamental change to the way we work, 0% federal interest rates, and an explosion in the number and scale of startups.

Smart businesses find ways to leverage disruptions into new markets or advances while insulating themselves from the risks associated with those disruptions.  There is risk both in engaging with new technology (see also all the failed blockchain startups) and in waiting too long to adopt.  I am talking with a couple of business leaders trying to figure out how to make use of the emergence of AI, with all the confusion that goes along with not being deep in the tech weeds (sometime in the last 6 months we started to refer to ML algorithms as AI, which is fascinating).

The difference with AI now is both the speed of its emergence and how much it has had a direct-to-consumer launch without guardrails.  So, let’s talk in plain English, or at least plain Jennie which is close enough, about what’s going on in this AI revolution and how to insulate yourself from risk while taking advantage of emerging opportunities.

Millennial Lifestyle Subsidy

Twice in my working history, the Federal Reserve set interest rates to near zero: Dec 2008–Dec 2015 during the Great Recession (remember “too big to fail” and poor Lehman Brothers) and March 2020–March 2022 during the COVID pandemic.  I’m not here to debate the Federal Reserve or the macroeconomic impact of these moves.  Instead, let’s talk about how this impacted business decision making, and is still impacting us with AI adoption.

Suddenly, for a lot of companies, money was essentially free. You are a VC sitting on access to capital and the cost of investing has plummeted; what do you do?  You start investing in a wider portfolio of companies because your risk of choosing wrong, and losing your investment, has gone down.  You are a person with an interesting idea to take to market, and now you suddenly have a huge influx of funding opportunities.  Not only did this increase the number of startups in the market, it also fundamentally changed the metrics on which those startups were judged.  Instead of looking for profitability as a major indicator of success, the metrics shifted to adoption.  The goal was attracting customers, getting them hooked on the freemium model, and showing explosive growth.  Instead of ARR (annual recurring revenue) we started tracking MAU (monthly active users).  It’s not that the goal posts moved; the entire field shifted.

An interesting side effect was that VC money drove down the price of these services, often below the cost to serve.  Good examples are transportation gigs like Uber, meal prep kits like HelloFresh, restaurant food delivery like DoorDash, and coworking like WeWork.  There was an explosion of these services that were tech driven, VC backed, and low cost.  Someone far more clever than I labeled this phenomenon the “Millennial Lifestyle Subsidy” (Derek Thompson, staff writer at The Atlantic, was that person: original article in 2019, follow-up article in 2022).  Then capital got more expensive, VCs started to look for actual returns, the market matured, and inflation and labor costs increased; the subsidy dried up, and now these services are starting to cost market rate.  Which results, naturally, in an exit of customers followed by an exit of providers.

AI’s Eating All the Capital

The same thing is happening in AI right now.  The cost of data centers, high performance chips, and the very expensive engineers building AI is not reflected in the cost of commercially available AI.  This time it’s not because of the low cost of money (as of this writing the Fed just maintained the fed funds rate at 3.5–3.75%, with the effective funds rate holding steady at 3.64%… a far, far cry from the 0%–0.25% of the quantitative easing era that began in March 2020).  It’s because of massive circular financing arrangements between major AI labs, hardware providers, and cloud hyperscalers.  Let me go a little deeper on that.  The big companies that are building the data centers, the companies that build the chips that go into those data centers, and the AI tech companies that will use that compute are basically trading money among themselves to fund these deals (great visual of this).  Which means there is no actual money being made here.  Also, the majority of this technology has roughly a three-year useful lifespan, likely less given the pace of chip improvement, but it’s being depreciated over five to six years.  Many folks have called out this puzzle; in particular, The Economist has done a great job of laying it all out.
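To see why the depreciation mismatch matters, here is a back-of-the-envelope sketch (the dollar figure is hypothetical; the three-versus-six-year gap is the point):

```python
# Hypothetical: a $12B GPU cluster that is economically obsolete in 3 years
# but is depreciated straight-line over 6 years on the books.
cost = 12_000_000_000

annual_expense_economic = cost / 3  # what the hardware really costs per useful year
annual_expense_booked   = cost / 6  # what actually hits the income statement

# While the hardware is still in service, reported earnings look better
# than reality by the difference:
overstatement_per_year = annual_expense_economic - annual_expense_booked
print(f"${overstatement_per_year:,.0f} per year")  # $2,000,000,000 per year
```

Stretch the schedule and the expense shrinks on paper while the replacement bill arrives years early.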

The commercially available tools will get more expensive.  They must.  It may take years before we fully see this play out. It may also come sooner, with both OpenAI (ChatGPT) and Anthropic (Claude) planning on going public this year. But we are already starting to see cracks in OpenAI’s dominance as competition heats up and the return on investment stutters.  Disney pulled out of a licensing deal while the ink was still wet, Walmart shelved their agentic checkout integration after returns didn’t live up to the hype, Microsoft restructured their deal into a more competitive framing, Nvidia’s $100B investment has stalled, and there is the start of user boycotts after a deal with the US military. All of this breaking news has happened to OpenAI in the last month. This isn’t just one company starting to fray; I believe these are early signs of the high investment and low returns of these models starting to wear thin.  There will not be one winner-take-all in this market; it will be a fragmented offering, which will result in lower market caps and multipliers for all the companies involved.

Real World, Real Applications, Normal People

Okay, Jennie, that’s all interesting and a bit scary about the long-term state of our economy.  And we haven’t even talked about the geopolitical risks of China’s heavy investments or the vulnerability of the Taiwan-based chip manufacturing industry.  But what do I, average person, do with all this?

Great question!  Here’s the good news.  Much like the millennial lifestyle subsidy, this all means that the AI technology we are seeing right now is artificially inexpensive.  While it is cheap to experiment, you should experiment with the commercially available tools.  There are tons of articles out there about when you should use Gemini vs ChatGPT vs Claude vs CoPilot.  There are also tons of resources for when you should tap industry-specific instead of generic models.  It is worth spending some time seeing what these tools can do and what use cases they can easily automate.  But… use caution.

If you have integrated an AI tool into a core business process and then forget how to do it without the AI, you will be in trouble when the price starts skyrocketing.  As with any vendor relationship, you should do your due diligence to understand what kinds of agreements and protections you have from the technology.  If you are building with AI embedded in your software, what kinds of integration layers do you have in place to easily switch vendors, or to fall back to a more traditional automation process if you need to?  At what point, in understanding how the tool works and clearly building the business case for the efficiencies gained, does it make sense to look at owning the means of production instead of relying on a vendor?  I.e., when do you roll your own LLM or voice agent?  All of these set points and risks will be different for different businesses.
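One way to keep that switching option open is a thin abstraction layer between your business logic and any single vendor. A minimal sketch of the idea (all class, function, and parameter names here are hypothetical; real vendor SDK calls are omitted):

```python
from typing import Protocol

class TextCompleter(Protocol):
    """Anything that can turn a prompt into text: an LLM vendor,
    a different vendor, or a plain rules-based fallback."""
    def complete(self, prompt: str) -> str: ...

class VendorA:
    """Wrapper around one hosted LLM API (SDK call details omitted)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the vendor SDK here")

class RulesFallback:
    """A traditional automation path you can switch to if pricing changes."""
    def __init__(self, responses: dict[str, str]):
        self.responses = responses

    def complete(self, prompt: str) -> str:
        return self.responses.get(prompt, "escalate to a human")

def summarize_order(completer: TextCompleter, order_id: str) -> str:
    # Business logic depends only on the interface, never on a vendor,
    # so swapping providers becomes a configuration change.
    return completer.complete(f"summarize order {order_id}")

print(summarize_order(RulesFallback({}), "A-123"))  # escalate to a human
```

The design choice is the interface, not the implementations: as long as business code only ever sees `TextCompleter`, the question "what do we do when this vendor triples its prices?" has an answer other than a rewrite.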

I think it is also super important to think about what AI will convince you it’s good at when it’s not.  AI doesn’t make decisions.  And AI doesn’t create, it plagiarizes (notably, the novel Shy Girl was recently pulled because it was written by AI, which was a contractual violation).  Also, I have customers (in B2B environments) starting to push back on vendors because they don’t want their data used by AI.

The technology is interesting; the market conditions can’t persist.


I had started this article planning to talk about a bunch of trends I am seeing in the market with AI.  But I so love the millennial lifestyle subsidy aspects of the current market conditions that I got overly verbose.  In simpler words, Jennie geeked out too much.  So, we are gonna call this part one and come back soon with part two.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted Claude (Anthropic), Gemini (Google), ChatGPT (OpenAI), and CoPilot (Microsoft/GitHub) by giving each my blog post.  CoPilot hit its limits for creating images because my husband used all our free tokens making complex spreadsheets for fun. I had to give all three multiple chances to create a good header image.  The initial passes were really bad.  Gemini won.  I also asked them all to give me a title and LinkedIn summary.  ChatGPT made me realize my thesis should be my title (my working title was Business Models Meet AI: Millennial Lifestyle Subsidy).  Claude did the best job of asking the right questions and generating a starting LinkedIn summary I iterated on.  Still too many em-dashes (:fist-shaking:).

AI in the Wild: Observations of how “Normal” People are Adopting AI

The other day my book club (the book was Why Fish Don’t Exist by Lulu Miller, highly recommend) descended into a discussion of AI.  It was an interesting group.  I was the only person there who works in tech.  Most of the rest of the group were teachers, white collar professionals, and consultants.  All moderately tech-savvy, with a clear understanding of navigating the internet, but no one was going to explain how a database or algorithm worked.  So, basically, a pretty representative sample of the average user for most of the applications I have worked on.

So what were some of the use cases that this sample was relying on AI for?

Search Replacement

The most common use case, by far, was using AI to search for information.  From “what should I make for dinner” to “historical facts about ARPANET,” it has become a combination of Google and Wikipedia rabbit holes.  Many people in the group described getting lost in a conversation with AI and going down a lot of wild routes to unusual connections.  This feels like the internet I grew up on, on steroids.  And it’s nearly impossible anymore to search without AI coming back with suggestions.  Google, Microsoft, Amazon, most large companies have replaced their native search with something powered by an LLM.  Because summarizing information is something AI is really good at.

Life and Social Coach

One member of the group mentioned using AI to come up with comebacks to a bad interaction with a neighbor.  Until her daughter criticized her for the energy use.  I let her know that, according to Science Vs., that kind of usage has low environmental impact while generating videos has a significant cost.  She didn’t realize you can make videos with AI.  This resulted in a lot of discussion of using AI as a life or social coach.

Honestly, that is how I am most using AI right now: job coach.  I like having something I can talk through specific situations with where I don’t feel like I am burning out a friend.  It’s not about output; it’s about the things I would have previously just not worked through.  Like the right way to approach the CEO of a company I want to work at.  How to think about rejections within 24 hours of an application.  When is the right time to pivot?  It’s like having someone to ask for advice who has literal encyclopedic knowledge and no social consequences for not taking the advice.  I think that’s a pretty cool addition, especially for those of us who are neurodiverse.

That mother who asked for comebacks to her neighbor?  She didn’t use any of them.  It let her vent steam without any social cost.  That seems like a good outcome.  Is that how we should be using this tech?  It doesn’t fundamentally change anything but, honestly, maybe it would make the world better if we thought a bit more before reacting to other people.  AI as a meditation supplement?  Introducing a pause before reaction just to see what the LLM says?

I also heard examples of using AI to get directions on the job or to help write emails to coworkers.  The latter was not effective because your coworkers can tell it’s not you.  The former is straight up dangerous.  I think it’s really important to understand that AI can’t make decisions.  It can be very convincing, but at the end of the day, it’s going to tell you the most common answer, not the right answer.  One person in the group noted that when it’s a topic she is knowledgeable about, she has observed AI to be right only about 40% of the time.  This is one of the reasons I will use the grammar checker in Word to correct my structural problems but I don’t let AI write my blog posts.

Research Assistant

The teachers in the group complained about students feeding homework assignments, especially essays, into AI and then having to use AI detectors to determine which kids were doing this.  Meanwhile, the parents described how their kids would use AI to start a project and then laboriously rewrite what the AI wrote to avoid the detectors.  This whole pattern seems to miss the point of what AI is good at and what is clearly cheating.  Generating your own ideas and then using AI to help structure those ideas coherently, e.g. having it help create an outline or review and offer revisions, seems like a good use.  Plugging in the prompt and then turning in the paper just seems like a missed opportunity to actually learn the content of the class.  No real difference from googling the answer and scraping an essay from the internet.  Only it’s way easier now to do that.

What was really interesting in the debate about AI in schools was that, at least with this group, it hasn’t yet matured to the point of teaching children how to make smart use of the technology.  There is a lot of reactivity to the tools kids have access to now.  Both of my kids are in Girls Who Code and have played with Scratch.  We have used physical games to understand the logic patterns of coding.  We have a family rule that they have time limits on device time spent consuming other people’s content (ugh, the watching-people-play-Minecraft videos), but they can use their devices for creating.  Both have played with Suno to create their own music (one plays guitar, the other sings in the choir).  This is a mature reaction to emergent technologies (if I do say so myself).  We aren’t there yet in having clear guardrails in our schools or in society.

Another member of the group described her personal use of AI as a research assistant.  She dumps her historical writing and her research into the AI.  She then prompts it to compile what she is looking for in that particular project.  She is training it on her writing style.  She is also checking sources again before publishing anything.  She is having the AI generate a first draft she can react to and adjust.  This feels like the cleanest real example of using AI as a research assistant.  I imagine in the past someone would have used a college grad student for exactly this kind of work (pin that destruction-of-the-employment-pipeline thought for later).  It’s a fascinating use case.  This is a legit professional using AI to make herself more efficient, not to replace her own thinking or expertise.

Intern

Related to the research assistant, the most common usage I have seen inside companies, and engineering teams, is treating AI as a really capable intern with no business expertise.  It has no context for the environment and it can’t make decisions, but it sure can generate a lot of code.  I have talked to engineers who see real value from AI in throwaway scripts and, in certain very greenfield situations, in writing production code with guardrails.  But you can’t unleash it on a legacy system without serious consequences.  So this is back to that pin I put in the research assistant discussion.  I fear that AI is destroying the talent pipeline.  There is a lot you learn in your early years, and if we replace zero-years-of-experience roles, both inside and outside of tech, we miss out on those opportunities to learn and grow our talent naturally.  We will see how this works out in the next few years.

Jennie’s brief diversion into a history lesson

At one point in the conversation I finally outed myself as working in tech and noted how strange watching AI use in the “civilian world” has been from where I sit.  AI is one of the first technical tools I have seen in my career that was broadly launched to the public before getting the natural pressure testing and guardrails introduced by a more technical audience.  It was decades between the birth of the internet (as ARPANET) and when AOL became commonplace in the mid to late 90s.  APIs revolutionized the way computer systems could communicate with each other in the early to mid-aughts, but they still aren’t something a civilian can really make use of.  Blockchain was born in 2009, but bitcoin only started to go mainstream with platforms like Coinbase in 2012, and the ability to buy it through traditional brokerages (most notably Fidelity and Schwab) didn’t arrive until 2024.

AI has conceptually existed since the 1950s, and Deep Blue won its chess match against Kasparov in 1997.  Machine learning algorithms have been used by tech teams aggressively for the last two decades.  But generative AI has exploded in the last year, both inside and outside the software industry.

We are in a fascinating time with this emerging technology, and I look forward to more opportunities to observe users and inventive uses.  I also look forward to looking back on this blog post with head-shaking amusement.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image and as part of searching for historical information (because it’s unavoidable). In this case I prompted Claude, Gemini, and ChatGPT by giving each my blog post, and ChatGPT won.  I also asked them all to give me a title and LinkedIn summary.  ChatGPT and Gemini both came up with similar ideas which I iterated from.

Questions Before Answers

Usually on my desk, prominently displayed beneath my computer monitors, there is a bright purple index card with the words “Questions before Answers” written in my best fancy script.  I have currently misplaced it and need to make a new one.  I feel its absence.

After one particularly intense meeting with a stakeholder I walked away from my computer for a bit, breathed deeply, and created this card.  Anyone who has worked with me is probably now thinking “I know exactly who she is talking about” but that’s the funny thing: you don’t.  In every team I have worked with it’s a different difficult stakeholder, even varying day to day, often more than one.  At this company it’s account managers, at that one it was operations, at this one here sales, over there engineering.  I made the card because it’s a reminder of human nature more than a particular stakeholder.  And in that conversation that day, it was me that needed the reminder.

Often in business we are trying to move fast.  We come into a conversation, and someone is presenting something that is hurting them.  Often it’s something in the software that is not helping them: a missing filter, an interaction that isn’t quite right, a missing field, a process that is incomplete.  Behind that is a whole set of business processes built up by people doing their best with incomplete information or imperfect operations.  People dealing with a reality we are trying to fit into software.  Behind that is money, customers, demands.  The real pain in the room, and all the things those people have done to solve their problem before talking to you, short-circuits the conversation.  We immediately get into solutions.  “Just make this change.”  Urgency drives, and understanding falls by the wayside.

There are three things my kids know before anything else.  The first is that I love them and they are safe.  The second is that anything they want to do requires practice, and there are no shortcuts.  The third is that we can solve any problem if we all agree first on the problem statement.  Weirdly, all three of those also make it into my workplace.  I’m at least very consistent.  I am very diligent about bringing those conversations back from “implement this change” to “how can we solve this problem together?”  How do I do that?  The same way I do with my children.

The first thing I do is work to get down to the root problem.  This includes questions like:

  • Can you walk me through that in the application?
  • Please show me what you are doing when that happens.
  • Help me understand what you are doing, what are you trying to do here?
  • Explain it to me like I’m a German Shepherd.

That last one came from one of my favorite engineering teams; it’s great for breaking the tension.

Then I stop and I listen.  I listen for a long time.  I ask follow-up questions.  I don’t try to solve their problem; I just keep coming back to make sure we are on the same problem statement.  The most powerful statements to help with this are:

  • What I hear you saying is:
  • I am seeing you do X, am I getting that right?
  • It seems like the current parameters are X, is that right?  Where within that are you hitting a problem?

It is so easy to slip into “if we just change this one thing, it will solve their problem”.  That is quick-win thinking.  And when you can find a quick win like that, celebrate.  Write about it.  Post to LinkedIn.  Take a pause and tell all your friends.  Because those so rarely happen.  If the problems we faced had easy solutions, where would the fun be in that?

Only once I can clearly articulate their problem, by asking a lot of questions, do I start to offer any solutions.  And this is the beauty of that bright purple index card.  It’s a reminder to me to stay humble and curious.  Fundamentally, that is what product is.  It’s solving business problems with technology, but we can only do that if we truly understand those problems.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted both Gemini and ChatGPT by giving each my blog post, and ChatGPT won.

CodeMash Conference 2023 – Consuming Endangered Pachyderms

Slides

Blogs on related topics

Mental Model Minute: Seven Levels of Authority

The Next Right Thing

Build Things People Want

Also check out other presentations.

CodeMash Conference 2023 – The Interview Lab

Slides

Blogs on related topics

Processes for Establishing Teams

Mental Model Minute: Intent Based Leadership

Remote Work is Here to Stay

Also check out other presentations.

Processes for Establishing Teams

Often when I am coming into a new team, much of the work I am doing, while I get up to speed on the product and market, is helping the team focus and work together.  Over the years I have come across, modified, and developed several tools to help facilitate establishing teams, norms, and roles.  As always, no tool is exactly right for all teams.  Adapt and change for the nature of the team you are dealing with.

Norms

Goal

Establish a common set of team level rules for how we collaborate and interact.  This can be everything from SLAs around PRs to the number of meetings we have, from how we handle conflict to support rotations.

This process should be team led.  The team leader should be a participant in the discussion.  As such it can be very difficult for the team leader to also facilitate the discussion.  A team norms discussion also becomes a way the team actively works through their communication and conflict management style.  If possible, it is better to do this synchronously.  The conversation is part of the point.  Approximately an hour should suffice for this discussion.  Some teams can do multiple exercises (norms + mandate or norms + decision making) in one session.  Once norms are established, they should be updated over time.

Process

Let the team know you will be doing an exercise so they can think about it in advance.  The prompt is “Think about a team you worked on that was great.  What made it great?  Think about a team you worked on that was terrible.  What made it terrible?”  Capture one thought per sticky.

Then group the stickies together into themes.  Talk through the stickies in each theme together as a group.  As a facilitator, make sure people are engaging in debate about whether each idea is important to them as a team or not.  This will allow you to come to a set of statements that basically boil down to “We believe…”

Examples

From a very mature team with a lot of baggage, this set of norms evolved over the 2 years the team existed. This example is only a subset of the total norms the team had. They did a remarkable job of holding each other accountable by simply saying “thank you for following our team norms”.

At a very different company, a very different result. This was a team coming out of a lot of process whiplash.  Also, they were globally distributed (designer in India, team lead in South Africa) so there was a lot of concern for async communication and clear hand offs.

Roles and Responsibilities

Goal

Clearly defining who owns which part of a process, or who has decision-making power, is essential to clean handoffs and action.  While many activities fall clearly into one job title (HR is a great example of a job with clear boundaries), in the space between Product and Engineering, or between levels in an organization, things can be a lot murkier.

This process can either be leader-with-team or leaders among themselves, depending on where you are seeing grey areas.  It is best that anyone whose position is being defined is part of the process.  This process requires a willingness to engage in some difficult discussions about both skills and responsibilities, and it can become heated, so it’s best to have someone with positional power who can break an impasse.

This process should be done in conjunction with establishing a DACI process or the Seven Levels of Authority, so you can clearly talk about both the “owner” of a process and who should be involved in the decision, and how involved.

Process

This is a classic card sort exercise focused on job responsibilities.  This should happen in two phases.

Phase 1: Collect responsibilities

Bring the affected group together to collect all the responsibilities that need to be defined and owned.  Working alone, capture one responsibility per sticky.  This can be as simple as “review a PR”, as mundane as “schedule meetings”, or as complex as “architectural direction”.  Set a timer for the independent brainstorming.  At the end of that time, go through all the items, group them, and choose a “winning” card to represent each group of ideas.  Take these cards to a new, clean area.

Phase 2: Card sort

Have boxes for each of the roles you are trying to define.  Together as a team, determine which box each sticky belongs in.  If a sticky belongs in multiple boxes, duplicate it.  If a sticky represents a “decision owner”, set it aside for the decision-making structure discussion.

Example

Most of these discussions I have done have been in person, so I don’t have a screenshot.  This is a heavily cleaned-up example from our process of hiring a Head of Eng and establishing teams with a “pod captain” for each.

Decision making structure

Goal

Sometimes it is helpful to separate the Roles and Responsibilities discussion from the decision-making structure.  This is especially true when there are a lot of people involved in decision making and it can be separated from the actual job responsibilities.  This has been the case in my current role as we have moved from a single engineering team of 25 to 5 pods with pod captains.

Process

Same as the ‘roles and responsibilities’ card sort, with the initial part focused on decision-making stages.  In our process we also added the variables of where and how the decision should be made, plus an SLA, because of how async we are.

Example