AI in the Wild: Observations of how “Normal” People are Adopting AI

The other day my book club (the book was Why Fish Don’t Exist by Lulu Miller, highly recommend) descended into a discussion of AI.  It was an interesting group.  I was the only person there who works in tech.  Most of the rest of the group were teachers, white collar professionals, and consultants.  All were moderately tech-savvy, with a clear understanding of how to navigate the internet, but no one was going to explain how a database or algorithm worked.  So, basically, a pretty representative group of the average user for most of the applications I have worked on.

So what were some of the use cases that this sample was relying on AI for?

Search Replacement

The most common use case, by far, was using AI to search for information.  From “what should I make for dinner” to “historical facts about ARPANET,” it has become a combination of Google and Wikipedia rabbit holes.  Many people in the group described getting lost in a conversation with AI and going down wild routes to unusual connections.  This feels like the internet I grew up on, on steroids.  And it’s impossible anymore to search without AI coming back with suggestions.  Google, Microsoft, Amazon, and most large companies have replaced their natural search patterns with something powered by an LLM.  Because summarizing information is something AI is really good at.

Life and Social Coach

One member of the group mentioned using AI to come up with comebacks after a bad interaction with a neighbor.  Until her daughter criticized her for the energy use.  I let her know that, according to Science Vs., that kind of usage has low environmental impact while generating videos has a significant cost.  She didn’t realize you can make videos with AI.

This resulted in a lot of discussion of using AI as a life or social coach.  Honestly, that is how I am most using AI right now: job coach.  I like having something I can talk through specific situations with where I don’t feel like I am burning out a friend.  It’s not about output, it’s about the things I would have previously just not worked through.  Like the right way to approach the CEO of a company I want to work at.  And how to think about rejections within 24 hours of an application.  When is the right time to pivot?  It’s like having someone to ask for advice who has literal encyclopedic knowledge, with no social consequences for not taking the advice.  I think that’s a pretty cool addition, especially for those of us who are neurodiverse.  That mother who asked for comebacks to her neighbor?  She didn’t use any of them.  It let her blow off steam without any social cost.  That seems like a good outcome.  Is that how we should be using this tech?  It doesn’t fundamentally change anything but, honestly, maybe it would make for a better world if we thought a bit more before reacting to other people.  AI as a meditation supplement?  Introducing a pause before reaction just to see what the LLM says?

I also heard examples of using AI to give directions on the job or to help write emails to coworkers.  The latter was not effective because your coworkers can tell it’s not you.  The former is straight up dangerous.  I think it’s really important to understand that AI can’t make decisions.  It can be very convincing but, at the end of the day, it’s going to tell you the most common answer, not the right answer.  One person in the group noted that when it’s a topic she is knowledgeable on, she has observed AI to be right only about 40% of the time.  This is one of the reasons I will use the grammar checker in Word to correct my structural problems but I don’t let AI write my blog posts.

Research Assistant

The teachers in the group complained about students feeding homework assignments, especially essays, into AI and then having to use AI detectors to determine which kids were doing this.  Meanwhile, the parents described how their kids would use AI to start a project and then laboriously rewrite what the AI wrote to avoid the detectors.  This whole pattern seems to miss the point of what the AI is good at and what is clearly cheating.  Generating your own ideas and then using AI to help structure those ideas coherently, e.g. having it help create an outline or review and offer revisions, seems like a good use.  Plugging in the prompt and then turning in the paper just seems like a missed opportunity to actually learn the content of the class.  No real difference from just googling the answer and scraping an essay from the internet.  Only it’s way easier now to do that.

What was really interesting in the debate about AI in schools was that, at least with this group, it hasn’t yet matured to the point of teaching children how to make smart use of technology.  There is a lot of reactivity to the tools kids have access to now.  Both of my kids are in Girls Who Code and have played with Scratch.  We have used physical games to understand the logic patterns of coding.  We have a family rule that they have time limits on device time spent consuming other people’s content (ugh, the Minecraft videos) but they can use their devices for creating.  Both have played with Suno to create their own music (one plays guitar, the other sings in the choir).  This is a mature reaction to emergent technologies (if I do pat myself on the back).  We aren’t there yet in having clear guardrails in our schools or in society.

Another member of the group described her personal use of AI as a research assistant.  She dumps her historical writing and her research into the AI.  She then prompts it to compile what she is looking for in that particular project.  She is training it on her writing style.  She is also checking sources again before publishing anything.  She is having the AI generate a first draft she can react to and adjust.  This feels like the cleanest real example of using AI as a research assistant.  I imagine in the past someone would have used a college grad student for exactly this kind of work (pin that destruction-of-the-employment-pipeline thought for later).  It’s a fascinating use case.  This is a legit professional using AI to make herself more efficient but not to replace her own thinking or expertise.

Intern

Related to the research assistant, the most common usage I have seen inside companies, and engineering teams, is treating AI as a really capable intern with no business expertise.  It has no context for the environment and it can’t make decisions, but it sure can generate a lot of code.  I have talked to engineers who see real value from AI in throwaway scripts and, in certain very greenfield situations, in writing production code with guardrails.  But you can’t unleash it on a legacy system without serious consequences.  So this is back to that pin I put in the research assistant discussion.  I fear that AI is destroying the talent pipeline.  There is a lot you learn in your early years and if we replace zero-experience roles, both inside and outside of tech, we miss out on those opportunities to learn and grow our talent naturally.  We will see how this works out in the next few years.

Jennie’s brief diversion into a history lesson

At one point in the conversation I finally outed myself as working in tech and noted how strange watching AI use in the “civilian world” has been from where I sit.  AI is one of the first technical tools I have seen in my career that was broadly launched to the public before the natural pressure testing and guardrails introduced by a more technical audience.  It was decades between the birth of the internet (as ARPANET) and when AOL became commonplace (in the mid to late 90s).  APIs revolutionized the way computer systems could communicate with each other in the early to mid-aughts, but they still aren’t something a civilian can really make use of.  Blockchain was born in 2009, but it’s really only in the last few years that bitcoin has become mainstream, with platforms like Coinbase in 2012 and the ability to buy it on traditional brokerages (most notably Fidelity and Schwab) not until 2024.

AI has conceptually existed since the 1950s, and Deep Blue won that chess match against Kasparov in 1997.  Machine learning algorithms have been used by tech teams aggressively for the last two decades.  But generative AI has exploded in the last year, both inside and outside the software industry.

We are in a fascinating time with this emerging technology and I look forward to more opportunities to observe users and inventive uses.  I look forward to looking back on this blog post with head shaking amusement.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image and as part of searching for historical information (because it’s unavoidable). In this case I prompted Claude, Gemini, and ChatGPT by giving each my blog post.  ChatGPT won.  I also asked them all to give me a title and a LinkedIn summary.  ChatGPT and Gemini both came up with similar ideas, which I iterated from.

Questions Before Answers

Usually on my desk, prominently displayed beneath my computer monitors, there is a bright purple index card with the words “Questions before Answers” written in my best fancy script.  I have currently misplaced it and need to make a new one.  I feel its absence.

After one particularly intense meeting with a stakeholder I walked away from my computer for a bit, breathed deeply, and created this card.  Anyone who has worked with me is probably now thinking “I know exactly who she is talking about” but that’s the funny thing: you don’t.  In every team I have worked with it’s a different difficult stakeholder, even varying day to day, often more than one.  At this company it’s account managers, at that one it was operations, at this one here sales, over there engineering.  I made the card because it’s a reminder of human nature more than a particular stakeholder.  And in that conversation that day, it was me that needed the reminder.

Often in business we are trying to move fast.  We come into a conversation, and someone is presenting something hurting them.  Often, it’s something in the software that is not helping them.  A missing filter, an interaction that isn’t quite right, a missing field, a process that is incomplete.  Behind that is a whole set of business processes that have been built up by people doing their best with incomplete information or operations.  People dealing with a reality that we are trying to fit into software.  Behind that is money, customers, demands.  The real pain present, and all the things those people have done to solve their problem before talking to you, short-circuit the conversation.  We immediately get into solutions.  “Just make this change”.  Urgency drives and understanding falls by the wayside.

There are three things my kids know before anything else.  The first is that I love them and they are safe.  The second is that anything they want to do requires practice and there are no shortcuts.  The third is that we can solve any problem if we all agree first on the problem statement.  Weirdly, all three of those also make it into my workplace.  I’m at least very consistent.  I am very diligent about bringing those conversations back from “implement this change” to “how can we solve this problem together?”.  How do I do that?  The same way I do with my children.

The first thing I do is work to get down to the root problem.  This includes questions like

  • Can you walk me through that in the application?
  • Please show me what you are doing when that happens.
  • Help me understand what you are doing, what are you trying to do here?
  • Explain it to me like I’m a German Shepherd.

That last one came from one of my favorite engineering teams; it’s great for breaking the tension.

Then I stop and I listen.  I listen for a long time.  I ask follow-up questions.  I don’t try to solve their problem; I just keep coming back to make sure we are on the same problem statement.  The most powerful statements to help with this are:

  • What I hear you saying is:
  • I am seeing you do X, am I getting that right?
  • It seems like the current parameters are X, is that right?  Where within that are you hitting a problem?

It is so easy to slip into “if we just change this one thing, it will solve their problem”.  That is quick-wins thinking.  And when you can find a quick win like that, celebrate.  Write about it.  Post to LinkedIn.  Take a pause and tell all your friends.  Because those so rarely happen.  If the problems we faced had easy solutions, where would the fun in that be?

Only once I can clearly articulate their problem, by asking a lot of questions, do I start to offer any solutions.  And this is the beauty of that bright purple index card.  It’s a reminder to me to stay humble and curious.  Fundamentally, that is what product is.  It’s solving business problems with technology, but we can only do that if we truly understand those problems.


Note: I don’t use AI to help write my posts or create example pictures. I do use AI to create the header image. In this case I prompted both Gemini and ChatGPT by giving each my blog post.  ChatGPT won.

It Depends

Early in my career, I moved into a new company as the only product leader for the engineering, network, and desktop support teams.  I went from having 2 direct reports to (eventually, as I rebuilt the team) 21, plus department head responsibilities.  Knowing I was going into a leadership position with a team that had had significant management turnover, I started with some small group and one-on-one meetings to get a sense of where the team was and what they needed.  In one of these meetings, a senior engineer told me that I was there to “help us build things people want”.  She was tired of building products that went unused and were then deemed a “failure”.  The software was built right, the team was high performing, but the gap between what they were building and what the market was buying was enormous.

Throughout the companies I have worked at, teams I have led, and products I have built, “build things people want” has continued to be my clarion call.  It has wormed its way into conference talks and blogs.  It has helped me find clarity in difficult conversations with sales and account teams.  And it has become more nuanced and more complex as the challenges of building products have become more complicated.  Who are the people we are building for?  How do we know what they want?  When do we drive based on instinct and when on user data?  What kinds of experiments help us better answer these questions?  What kind of commercialization is required?  When do we trade off between speed to market (I want it now) and performance and reliability (I want it dependable)?

Recently, I have found another phrase slipping into my mouth more and more: “It Depends”.  I have had to shed the comforting clarity that we are here to “build things people want” for the situational awareness, and yes anxiety, that there isn’t a clear right answer most of the time.  What works with one company, customer base, team, market environment, or existing product and tech debt doesn’t easily translate to another.  It is exciting to realize there is no rulebook but also terrifying that each moment is a rebuilding of the rules.  Some things stay the same but the complexity, as it always was, is in the details.

As product managers it often feels like we spend a lot of time trying to find repeatable processes and the perfect document.  While those things are valuable, the search for them can take us away from the details and nuances.  Perhaps it is simply my dislike of external structure, but I find myself wanting to explore that space of “it depends” more, while still holding on to that drive to “build things people want”.  Because I do think there are mental models and, yes, structures that can help us navigate the ambiguity.  But also, it’s work worth doing, and maybe along the way of my own explorations I can help others untangle these knots.  Also, I have been told by my teams before that they miss my “Jennie-isms” once we’re no longer working together.  So hopefully there will be a little humor and interesting stories along the way as well.